Tuesday, December 29, 2015

Half the equation and half the definition

There is a lot of confusion over what constitutes Net Neutrality, so much so that parties fiercely opposed to each other both claim to be for it. As an example, the current controversy over Free Basics pits Facebook, whose CEO penned an Op-Ed entitled "Free Basics protects net neutrality", against a volunteer coalition, SaveTheInternet (STI), whose entire charter is to protect Net Neutrality. As the Op-Ed from the volunteers suggests, the basic contention between Facebook and the volunteers is a difference in the definition of Net Neutrality. While the term Net Neutrality was coined by Tim Wu back in 2003, the definition of what constitutes Net Neutrality has been evolving.


Let me walk you through the evolution of the definition that the STI coalition is going with, which is widely accepted and which I have arrived at after years of researching the issue. I will explain why Facebook (amongst countless others; they are not solely to blame here) only considers half the equation and thus ends up with half the definition.



I'll start with the folk definition that we started hearing around 10 years ago:


Folk Definition: All packets must be treated equally

As networking researchers we knew that this definition was not practical, and it made little sense to us. Without getting too much into boring details, we knew routers on the Internet did not treat all packets identically (TCP-SYN packets are treated differently from TCP-Data or TCP-Ack packets, UDP packets are treated differently, packets at the tail of the queue are dropped during congestion, etc.). However, we also knew what the principle of Net Neutrality was trying to say, and that was that the network should not discriminate. So the folk definition needed to be made crisper. The FCC adopted Net Neutrality rules last year, and the definition broadly laid out the following principles:

FCC: ISPs will not block or throttle any traffic and will not implement any paid prioritization (no fast lanes)

This definition changes the abstraction from how packets are treated to how services are treated, which is a logical progression. However, the FCC missed one crucial aspect: it did not incorporate the concept of differential pricing in its Net Neutrality principles. Zero Rating, which is a special case of differential pricing, was not a big problem in the US when the Open Internet order was voted upon, and the FCC preferred a wait-and-watch approach to it (as a refresher, Zero Rating is the practice where consumers don't pay for the bandwidth of some or all services, and instead the cost of the bandwidth is borne either by the ISP or the content provider). The FCC definition focused on quality of service (QoS) as the determining factor for Net Neutrality, and insisted that all content on the Internet receive the same quality of service from ISPs. The intent was to not provide a competitive advantage to any service on the Internet, as that was in the best interest of both consumers and entrepreneurs. However, it missed out in the following way:


As a brief background, in game theory (the mathematical tool we have used in our work on analyzing the issue), the quantity that we focus upon is called Consumer Surplus. Surplus is defined as the utility derived from a particular service minus the cost paid to obtain that service. The utility is a mathematical quantity that models the impact of the QoS obtained for a particular application, and the FCC was absolutely correct in enforcing neutrality there, but the FCC did not model the cost paid in its definition of Net Neutrality (and it is this definition that Facebook uses to justify Free Basics as being consistent with Net Neutrality). How much an application costs changes the surplus a consumer obtains, and applications with similar utility (quality) but differing costs provide different surpluses. In game theoretic models higher surpluses translate into competitive advantages, so it is crucial to model the cost aspect of an application to get to a definition of Net Neutrality that works. Differential pricing or Zero Rating of select services absolutely violates the principle of Net Neutrality if we consider the impact on consumer surplus.
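As a rough illustration of why the cost side matters, here is a minimal sketch (the utility and price numbers are invented purely for illustration):

```python
def consumer_surplus(utility, price):
    # Surplus = utility derived from a service minus the cost paid to obtain it.
    return utility - price

# Two hypothetical services with identical QoS, and hence identical utility,
# which is all that a QoS-only (FCC-style) definition looks at.
utility = 10.0
price_regular = 2.0      # the consumer pays for the bandwidth
price_zero_rated = 0.0   # the bandwidth is zero rated, so the consumer pays nothing

print(consumer_surplus(utility, price_regular))     # 8.0
print(consumer_surplus(utility, price_zero_rated))  # 10.0 -> higher surplus, competitive advantage
```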



Thus, if we only model half the equation, we end up with a definition of Net Neutrality that focuses only on QoS; if we model the equation fully, then the price of the service comes into play. A lot of people only model half the equation (Facebook included) and thus claim that differential pricing (Zero Rating specifically) is fine under Network Neutrality, but that is not true. If we are talking about a true level playing field, then the other half of the equation cannot be ignored.




Access Now, a global non-profit aimed at protecting the digital rights of citizens, has adopted a definition that states the following:


Access Now: Net neutrality requires that the Internet be maintained as an open platform, on which network providers treat all content, applications and services equally, without discrimination.

This definition implies that differential pricing cannot be adopted, but it does not say so explicitly, and people (usually differential pricing advocates) can easily ignore the pricing aspect of a service and say Zero Rating is consistent with this definition. To fix this minor issue and make things explicit, I have proposed the following definition, which has received acceptance from academics, policy makers, entrepreneurs and activists alike, and which I announced publicly some time back:


This definition has the following properties:


  1. It incorporates both QoS and Pricing in the definition of Net Neutrality, thus correctly modeling consumer surplus.
  2. It makes explicit the notion that Net Neutrality is not how we treat packets but how we treat competition.
  3. It allows for reasonable traffic management by ISPs without violating Net Neutrality.
  4. It allows differential QoS and/or pricing as long as it is applied in a non-discriminatory way. ISPs can prioritize all real-time traffic (e.g. all voice or all video conference traffic, in a provider-agnostic way) over all non-real-time traffic. Similarly, all emergency services or health monitoring apps can be prioritized.
  5. It allows creating and differentially pricing entire classes of services. For instance, an ISP can create an extremely low latency service and offer it to all games/gamers without discrimination, and that should be fine. The definition permits differentiation between services, but prohibits discrimination within a service (see the sketch after this list).
  6. It ensures a level playing field on the Internet, where upstarts can come in and compete on the basis of ideas.
  7. Lastly, and this is only half in jest, the definition fits in 140 characters including a hashtag.
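To make properties 4 and 5 concrete, here is a minimal sketch of what "differentiation between classes, no discrimination within a class" looks like as a check; the class names, prices and offerings are invented purely for illustration:

```python
from collections import defaultdict

def non_discriminatory(offerings):
    # Each offering is (service_name, service_class, price_per_gb, priority).
    # Differential pricing/QoS across classes is allowed; within a class,
    # every service must receive the same price and the same priority.
    per_class = defaultdict(set)
    for _, service_class, price, priority in offerings:
        per_class[service_class].add((price, priority))
    return all(len(terms) == 1 for terms in per_class.values())

# All real-time apps prioritized and priced identically: allowed.
ok = [("AppA", "realtime", 1.0, "high"), ("AppB", "realtime", 1.0, "high"),
      ("SiteC", "bulk", 0.5, "low")]
# One app inside a class zero rated while its competitors are not: violation.
bad = ok + [("AppD", "realtime", 0.0, "high")]

print(non_discriminatory(ok))   # True
print(non_discriminatory(bad))  # False
```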
I am a big believer in the power of capitalism, as I think humans are largely selfish with varying degrees of altruism. However, for capitalism to work for the greater good of society, it is critical that the selfish interests of corporations align with public interests. And that's where regulators step in, using the concept of mechanism design, to introduce a minimal set of regulations that incentivize corporations to act in societal interest. I think the concept of Net Neutrality, defined in the way above, provides the mechanism for the Internet economy to work in the public interest. I hope the right regulations get passed.


Saturday, December 26, 2015

The business of ZeroRating

ZeroRating is dominating Network Neutrality conversations these days, whether it is the FreeBasics controversy in India, Binge On by T-Mobile, or Verizon's recent announcement of a plan similar to AT&T's sponsored data. Here are a few thoughts to consider about ZeroRating and why it makes no sense (to me).

If ISPs Zero Rate content, somebody has to pay for the bandwidth. Suppose the content provider pays for it. Then there is a pricing problem:

  • ISPs cannot charge the content provider a price above the price they charge consumers. Suppose they charge consumers X per MB of data, and they charge content providers X+Y per MB of data. Then, once there is enough traffic for the overheads to be covered, it is cheaper for content providers to simply send recharge coupons directly to the customers who used their services. Long term, pricing above the consumer price is not sustainable (the sketch right after these two bullets works through the arithmetic for both cases).
  • ISPs cannot charge the content provider a price below the price they charge consumers. Suppose they charge consumers X per MB of data, and they charge content providers X-Y per MB of data. Then, if the plan is truly open, a company like Gigato can come along, buy data in volume and become a virtual ISP. They can funnel traffic to services via their servers (they can remain good guys and not decrypt or store private data), sell the bandwidth to consumers at X - Y/2, and pocket the difference. The ISPs lose out.
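A minimal sketch of this arbitrage argument; all prices and volumes are invented for illustration, and fixed overheads are ignored for simplicity:

```python
# Illustrative prices in cents per MB.
X = 10          # what the ISP charges consumers per MB
Y = 2           # the premium (or discount) in the content-provider price per MB
volume_mb = 1_000_000   # monthly traffic a large content provider zero rates

# Case 1: the ISP charges the content provider X + Y per MB.
cost_zero_rating = (X + Y) * volume_mb
cost_recharge_coupons = X * volume_mb   # refund consumers directly at the consumer price
print(cost_zero_rating > cost_recharge_coupons)   # True: coupons are cheaper long term

# Case 2: the ISP charges the content provider X - Y per MB.
# A reseller buys at X - Y, sells to consumers at X - Y/2 and undercuts the ISP.
reseller_margin_per_mb = (X - Y / 2) - (X - Y)
print(reseller_margin_per_mb)   # 1.0 cent per MB pocketed by the virtual ISP
```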
Alternatively, the ISP pays for the bandwidth of the content.
  • This opens the possibility of vertical integration, where ISPs ZeroRate their own content, and that is extremely bad for competition. Or ISPs ZeroRate only a select group of content providers for non-transparent reasons (the FreeBasics or Binge On "technical" requirements that make the walled gardens implicitly closed), leading to a fractured experience/Internet for their consumers.

It is not clear to me what the business model is for ZeroRating where the ISPs make money and still provide an Open and Neutral Internet experience for their consumers. Economic issues are really the core of Network Neutrality, and nobody has explained to me how the economic model of ZeroRating remains consistent with Network Neutrality.


Friday, February 6, 2015

The most important aspect of the new FCC proposal on Net Neutrality

So it has happened. FCC Chairman Wheeler has announced that he wants to bring broadband (both wired and wireless) under Title-II provisions. Network Neutrality advocates are ecstatic, hailing him as a hero. The fact sheet outlining the FCC proposals is available here.

I found myself initially disappointed, and thought that this might actually be a retrograde step for consumers. The reason was that the Bright Line Rules the FCC highlighted (no blocking, no throttling and no paid prioritization) do not address actual, real problems. ISPs don't block (that's the job of governments), ISPs do not explicitly throttle (there are sophisticated ways of achieving the same result), and I don't know of any actual "paid prioritization" (i.e. QoS provided for specific content on routers/switches in exchange for money). The real problem is paid peering, which does not come under the purview of the Bright Line Rules.

Additionally, the forbearance that has been explicitly stated includes "no unbundling", which is not pro-consumer. The proposal also completely ignores "zero rating", which is the functional equivalent of fast lanes and could be a big problem down the road.

However, something that I missed initially, but went back to while reading a post by Jon Brodkin, is the 4th bullet point of the Title-II provisions that will be enforced. It says:


o Ensures fair access to poles and conduits under Section 224, which would boost the deployment of new broadband networks 

This is the single most important part of the new FCC proposal. My initial reaction was that the FCC had done nothing to promote competition, but I take it back. We have always maintained that competition is the real issue, not network neutrality, and there are many others who say the same thing; e.g., from Jon's publication Ars Technica there is an excellent post by Peter Bright saying "We don't need net neutrality; we need competition". Fair access to poles and conduits is critical to increasing broadband competition in the US, and that's the real problem to be solved, not enforcement of "Net Neutrality".


Thursday, February 5, 2015

My appearance on BBC World News discussing Net Neutrality


I appeared on the BBC World News program Global with Matthew Amroliwala on February 5th, 2015 and spoke about Net Neutrality and the new FCC proposals. I tried to make the point that Net Neutrality is a symptom and the real issue is (lack of) competition.

The clip below is courtesy of the BBC.





Friday, October 17, 2014

We need an "AppleSim" for Wired Internet

Our work has long argued that competition is the key to an Open Internet, not Network Neutrality regulations. The introduction of AppleSim by Apple on their iPads is a very interesting and important development in that regard. As Apple says in their marketing literature:

So whenever you need it, you can choose the plan that works best for you—with no long-term commitments
Apple's market power (ironically) has ensured that they got 3 of the 4 top wireless carriers in the US to sign on as partners. If you are not happy with the service of a provider, you can easily switch.

Now imagine if a similar situation existed for wired Internet access for consumers. Everyone had the equivalent of an "AppleSim" at home, say in the form of a router that can easily connect to different wired Internet providers with the click of a button on a web page. You are not happy with the Netflix speed from your provider FIOS? Fine - you switch to Comcast. Your ISP blocks Skype video calls? Fine - switch to an ISP that doesn't and is eager to get your business. It would mean the end of peering disputes like the ones we have seen in the past, which are a result of the market power of last mile monopolies. ISPs would compete with each other for your business based on the services they offer and the quality of their connection. Right now, the ISPs seem to be competing with each other in exploiting their last mile monopolies to extract revenue from content providers.

Laying down the infrastructure to make this vision of an "AppleSim" for wired access a reality is complex and involved, and a topic for another day. However, the entities with the equivalent of Apple's "market power" to flex their muscle and get ISPs to fall in line are municipalities. Municipal Internet is one way to ensure that there is enough competition at the last mile for consumers. Another closely related approach is what Stockholm has done with Stokab: they have created a shared last mile infrastructure that different providers can lease to reach customers. We need to get a similar concept going in the US, on a large scale. That would ensure an Open Internet, rather than relying on difficult-to-define and impossible-to-regulate ideals like "Network Neutrality".

Update:

AT&T has announced that when customers activate their AppleSim using the AT&T network, it will lock the SIM. There goes easy switching between providers. It is a retrograde step by AT&T, and Apple should push back.

Sunday, July 13, 2014

Our response to FCC's NPRM on Open Internet (14-28)

The following is the detailed response we have submitted to the FCC on its NPRM 14-28, Protecting and Promoting the Open Internet (aka the Net Neutrality proposals).


Response to the FCC NPRM on Open Internet prepared by Richard Ma (tbma@comp.nus.edu.sg) and Vishal Misra (misra@cs.columbia.edu)
1.     Para. 1: From the first sentence, the proposed rulemaking emphasizes “broadband investment and deployment”. It seems that the scope is really limited to broadband or eyeball ISPs. We want to emphasize that the net neutrality issues are not limited to broadband. Many core issues involve transit ISPs and content providers as well. When it mentions “open”, does it mean “neutral”? If it is not “open”, is the current Internet “closed”? It seems that “open” is not well-defined. We comment that “open” should be defined more clearly.
2.     Para. 2: “What is the right public policy to ensure that the Internet remains open?” Again, to answer this question, we first need to be clear about the definition of “open”.
3.     Para. 3: The no-blocking rule is too limited. An ISP could give an extremely small amount of capacity to certain content traffic, which does not block it; however, the resulting quality might be too poor for users to make use of the content. “Commercially unreasonable actions” needs a clear definition and interpretation.
4.     Para. 4: “the best ways to define, prevent … the practices that threaten an open Internet”. We suggest that the openness should be defined based on the level of market competition. Our work [1] has shown that whether regulations are needed really depends on whether market competition exists or not.
5.     Para. 6: In general, access ISPs do have incentives to maximize their revenue by differentiating services, which might or might not be beneficial for the end-users; however, this is different from being “open” or not. There is a zero-sum game between the edge and access providers; therefore, it is ultimately a profit-sharing problem [3, 4]. With respect to the second point, our recent work [2] studies paid prioritization on edge providers, and from a social welfare perspective our results support the use of priority-based pricing and service differentiation rather than imposing net neutrality regulations. With respect to the third point, our work [1] confirms that without market competition, a monopoly ISP does have incentives to make free ordinary services “damaged goods”; however, market competition could avoid such a scenario.
6.     Para. 10: “First, we generally propose to retain the definitions and scope of the 2010 rules.” We comment that the scope could be broadened and the definition of “open” should be made clearer. “Second, … should enhance the transparency rule …” We agree with this, as it will enhance competition among the ISPs. “Third, … no-blocking rule … with a revised rationale, in order to ensure that all end users and edge providers can enjoy the use of robust, fast and dynamic Internet access.” It seems that the proposed requirement is a lot more than a no-blocking requirement. “Fourth, … to adhere to an enforceable legal standard of commercially reasonable practices …, asking how harm can best be identified and prohibited and whether certain practices, like paid prioritization, should be barred altogether.” Again, we comment that a clearer definition of “commercially reasonable practice” is needed. In our work [1], we show that paid prioritization does not harm user welfare unless the access ISP is a monopoly that makes a lower-class service “damaged goods”. We suggest that a service comparison between the different service classes can be used as a guideline to limit service differentiation. In our recent work [5], we propose to limit the quality difference, rather than the absolute quality level, to ensure “commercially reasonable actions”. “Fifth, … dispute resolution process”. Our prior work [3, 4] looked into the ISP settlement issues and identified the Shapley mechanism as an ideal mechanism to arbitrate disputes among interconnecting ISPs and maximize social welfare. This can be practically difficult, so the correct approach should be to increase competition at the access level, which obviates the need for any dispute resolution.
7.     Para. 12: “consumers’ hunger for high-value content”. Net neutrality could hurt when low-value content competes with high-value content and degrades it via a negative network effect, i.e., congestion. Allowing ISPs to differentiate could prioritize high-value content for end-users.
8.     Para 13: the original purpose is to “encourage broadband deployment”. Too restrictive a policy will reduce the incentives of broadband providers to deploy in suburban areas.
9.     Para 17: “stop blocking” is easy to achieve; however, the real question is about service differentiation. Could we allow ISPs to limit the throughput of certain applications? Maybe we should not let ISPs “actively” limit the throughput, but they could maintain different service classes and let edge providers choose the services. We proposed and studied this type of “passive” service differentiation in our work [1, 2, 5], and our results in general support ISPs providing such differentiation as long as enough market competition exists.
10.  Para 18: it all boils down to the question of whether the network management is reasonable or not. By measuring the relative quality (throughput) between BT traffic and other traffic, as proposed in [5], one could infer whether the network management is reasonable. If we only look at the quality of BT traffic, even if it gets low throughput, it might be because the system is limited in capacity and cannot support the demand (not because of the ISP’s differentiation).
11.  Para. 20: “nondiscrimination and transparency rules” We suggest that nondiscrimination should be defined to allow passive service differentiation [1, 2, 5], e.g., class-based service differentiation, under which content providers get to choose what service to use.
12.  Para. 21: we want to comment that the 2nd case of “blocking” is a special case of the 3rd scenario of “unreasonable discriminating in transmitting lawful network traffic”. We need a definition of “unreasonable discrimination” and suggest using the quality difference between different services.
13.  Para. 23: “… D.C. Circuit … grant the Commission affirmative authority to encourage and accelerate the deployment of broadband capability to all Americans through, among other things, measures that promote competition in the local telecommunications market or remove barriers to infrastructure investment.” We totally agree with this view. Based on the results of our work [1, 2], we suggest that the FCC should promote competition rather than imposing restrictive rules on ISPs. “the court struck down the anti-blocking and anti-discrimination rules, explaining that the Commission had chosen an impermissible mechanism … imposed per se common carriage requirements … ” Based on our work [5], we suggest that the FCC should not impose strict service “requirements” on the providers. However, it could impose rules to restrict the quality difference among service classes. On the one hand, this type of regulation does not impose a common carriage requirement. On the other hand, it is enforceable and provides ISPs space to differentiate services.
14.  Para. 26: we want to comment that certain passive service differentiation does not necessarily mean “restricting edge providers’ ability to reach users …”. Our prior work [1] showed that the “short-term incentive to limit openness” does exist under a monopolistic market. However, if market competition is enough, this behavior will reduce the market share of the ISPs.  We also showed that under market competition, ISPs will have incentives to differentiate services, resulting in an increase in the social welfare.
15.  Para. 27: “… any small costs of imposing the rules were outweighed by the positive effect on network investment from the preservation of the openness that drives the virtuous circle …”. We have mixed feelings on this. A strict rule might reduce the investment incentives of access ISPs to deploy capacity to rural areas.
16.  Para. 32: “tremendous growth in the online video markets … revenues from online video services grew 175% from $1.86 to $5.12 billion …” This causes congestion and an imbalance in profit sharing, which reduces the investment incentives of access ISPs. Our prior work [3, 4] on Shapley profit-sharing indicates that a re-balancing is needed to compensate the access ISPs from the content-side ISPs. This can be achieved in two ways, either by paid-peering arrangements or by increasing competition at the access.
17.  Para. 33: we agree that competition should be encouraged; however, without incentives for ISPs to do service differentiation, they might not have any incentives to deploy infrastructure to rural areas. Also, a natural monopoly exists in those areas, because a second provider is basically too expensive. Regulating this natural monopoly could be different from regulating other ISPs; otherwise, no ISPs will be willing to deploy infrastructure in rural areas.
18.   Para. 34: we want to comment on the role of the Internet’s openness in facilitating innovation, economic growth, competition and broadband investment and deployment. Here, we have conflicting roles of competition and broadband investment. Increasing competition on the broadband side might reduce their incentives to invest, but encourage more investments on the edge/content providers. However, lack of investment in infrastructure will eventually hurt the whole ecosystem, innovation and economic growth. We need to balance the different objectives. One solution is to look at both broadband providers and edge providers as a whole Internet supply chain, and think about the revenue/profit sharing among them, so as to maximize the utility of the overall ecosystem. Under such a macro model, we might allow broadband providers to differentiate services and charge edge providers.
19.  Para. 36: the scope of the market between broadband and end-users is limited. No evidence exists for “pay-for-priority” on end-users. However, broadband providers are doing that on the peering side.
20.  Para. 37: voluntary subsidization could be a useful mechanism, where the broadband provider still maintains a physically neutral network, while service differentiation is maintained at a higher economic layer. Product differentiation is not uncommon in other areas, e.g., first class airfare, student tickets, etc. Our on-going work [6] analyzes the subsidization competition among the content providers when policy allows it. Our preliminary results show that it could serve as a (physically) neutral and viable mechanism to attract investment incentives for access ISPs. The danger is that new entrants in the edge provider market might find it difficult to compete with established players with deeper pockets.
21.  Para 40: The European ISPs have to unbundle their local loops, which creates competition among the providers. Thus, even when there is no policy to prohibit ISPs from blocking traffic, competition will drive them not to hurt user welfare, if such blocking does so. This is supported by our public option work [1] and the developments in the UK market [7]. On the other hand, the lack of competition in the US (because of no local-loop unbundling) makes policy more important. The problem is not about whether there are cases of blocking, but whether competition is enough for users to choose providers in case one of them is not satisfactory to the users.
22.  Para 41: Again, the problem is related to the lack of competition. When AT&T is the only provider and it lacks capacity, there might be a good reason (network management, or economic) for it to restrict applications. It is difficult to judge whether such a restriction is good or bad. Without such a restriction, capacity-heavy applications might affect the usage of other apps when the wireless capacity is limited. When competition is there, users will have choices to switch to another provider which might allow such apps, but probably with poorer quality. In a monopoly case, regulation needs to be carefully designed. If we want to guarantee a minimum service quality for apps, this threshold might depend on the capacity of the provider and other factors. Instead, we propose to limit the quality difference [5] among services rather than putting an absolute guaranteed minimum service quality on the free services.
23.  Para 42: we do not agree that “the threat of … did not depend on … market power”. In fact, that is the root cause. We have shown the impact of market power in our work [1, 3].
24.  Para 43: we do not agree that “the commission need not engage in a market power analysis to justify its rules …”. Regarding “the ability to block edge providers depended on ‘end users not being fully responsive to the imposition of such restrictions’”, we comment that transparency is needed to make users aware of such restrictions. The lack of responsiveness could be because the restriction does not affect users’ welfare/utility much and switching to another provider is not necessary. The real threat is that it matters to the users, but due to the lack of market competition and user choices, the users have to stay with the current provider.
25.  Para. 44: “broadband providers have incentives and the economic ability to limit Internet openness …” we comment that broadband providers do not have a direct incentive to limit openness, unless it is going to increase their profit/revenue. In a competitive market, they won’t have such incentives if users are sensitive to service quality. Also, broadband providers do not have strong incentives to limit openness on the end-user side, but have more incentive to differentiate at the peering side with the edge providers, who might have much larger profit margins compared to the access providers. The changing market place, including the change in content, e.g. the rise of Netflix and CDNs, has affected broadband peering strategies greatly. Technology-wise, it is not difficult to physically separate different capacities along different routing paths. With the emerging Software Defined Networking (SDN) approaches, ISPs might have stronger abilities to manage their network resources to differentiate their peering relationships with other ISPs/CPs. Our work [1] showed that under a monopoly setting the ISP has incentives to differentiate capacity on the CP side to increase revenue. In particular, the ISP has incentives to make the free/open class “damaged goods” to force CPs to use the premium class and pay, which might hurt social welfare. We showed that competition, e.g., the introduction of a public option, could solve the problem. The justification for charging edge providers could be basic economic efficiency: capacity is limited, and by differentiating high- and low-value traffic, the ISP could provide higher welfare to users and CPs.
26.  Para. 45: Transit ISPs use parameters in the BGP routing protocol to choose different routing paths. Access ISPs use public/private peering settings to control traffic. On the user side, mobile providers use data caps to control and price traffic. AT&T’s sponsored data plan introduces voluntary subsidization from CPs to end-users for their over-cap traffic. As for the Comcast case, its private peering practice might be used to exploit its monopoly market power in the access market. Level3 data hints at that.
27.  Para. 46: we do not have data to understand the switching costs. However, we support giving transparency to users, so that they understand more of their choices/options, which improves market competition.
28.  Para. 47: “We seek comment on the state of competition in broadband Internet access service, and its effect on providers’ incentives to limit openness.” We guess many people will comment on the monopolistic status of Comcast. Our public option work [1] showed that a monopoly has incentives to differentiate service on the CP side and to make public peering “damaged goods”, which is what Level3 claims. Thus, we advocate a public option approach to introduce competition, under which no regulation of the monopoly is needed. Otherwise, suitable regulation might be needed to regulate the monopoly. DSL’s openness practice could incentivize high-speed broadband to differentiate CPs less aggressively, although competition from another high-speed provider will help more.
29.  Para. 48: needs more market data for market competition analysis.
30.  Para. 49: as our study [1] showed, market competition is a key to whether regulation is needed. We suggest that regulation should be closely coupled with market power analysis.
31.  Para. 50: “there are other economic theories …” 1) Shapley value for inter-ISP settlement [3, 4], and 2) public option [1] for access ISP service differentiation and competition, and 3) subsidization competition [6] among the content providers. Also, on the value of the network, the exact form of n^2 or n log n doesn’t matter - the important theoretical distinction is whether the value of the network is a convex function of n or not  - and both n^2 and n log n satisfy convexity.
32.  Para. 51: technically, access ISPs might be able to block end-user traffic; however, flow-based control is expensive and difficult for ISPs, and there is little economic incentive to do so. ISPs largely use public/private peering nowadays to differentiate classes of traffic via inter-AS routing. The routing is currently based on the BGP protocol, which is too coarse a tool for fine-grained flow-level control. Thus, we suggest the rules extend their scope to include inter-AS settlement and peering relationships.
33.  Para. 52: yes, ISPs could use traffic management tools. However, most practices are simply macro-level private/peering relationships, where physical links are separated/isolated and no active differentiations are needed for the traffic flows.
34.  Para. 59: “.. the rules were not intended “to affect existing arrangements for network interconnection, including existing paid peering arrangements””, we comment that the interconnection agreements between broadband providers and other transit ISPs should be included within the scope; otherwise, little can be done to limit the service differentiation done on the CP side.
35.  Para. 60: in general, a “specialized service” could be a prioritized service which brings extra revenue for the ISP and its users. However, as we showed in our work [1], ISPs might have incentives to make the ordinary service “damaged goods” so as to force users to use the specialized service and pay for it. Thus, we might need to impose conditions under which specialized service could be provided.
36.  Para. 62: this is reasonable and useful to distinguish mobile and fixed service providers, and based on their characteristics, e.g., cost structure and capacity, to impose different regulations, e.g., different minimum requirement for service and thresholds for service quality differences.
37.  Para. 63-88: we generally support transparency in every aspect, as long as it’s practically feasible. The main reason is that transparency could improve market competition, under which strict regulations might not even be needed.
38.  Para. 89-109: blocking is definitely undesirable for users and CPs. However, the D.C. Circuit vacated the rule mainly because of the legal “common carriage” requirement, which the ISPs might not be responsible for. We do not oppose the proposed rule; however, it still does not have a legal justification, and “a clarification … does not preclude …” makes the rule almost unenforceable. Imposing a minimum level of service is encouraging; however, the legal justification is still missing and what level to set needs further thought.
39.  Para. 90: “relationship between no-blocking and commercially reasonable rules”. No-blocking is a special case of the latter.
40.  Para. 101: “we seek comment on how minimum level of access should be defined”. This is important. Different types of ISPs should have different thresholds. Different types of CPs might have different minimum requirements, although from a regulatory and practical perspective it is difficult to set a different requirement for each CP. Furthermore, under sufficient competition, we think even the minimum requirements are not needed.
41.  Para. 104: we agree that the requirement should also be evolving based on the changing characteristics of content and user expectations.
42.  Para. 110-141: “Codifying an enforceable rule that is not common carriage per se”. The D.C. Circuit rejected the prior rule because “(it) so limited broadband providers’ control over edge providers’ transmissions that [it] constituted common carriage per se.” From this perspective, a minimum service requirement still feels like imposing a “common carriage” requirement on the ISPs, which does not have a legal justification for the FCC (although it might have its economic justifications). Therefore, in our work [5], we suggest a milder but enforceable rule, which restricts the difference in service qualities of the different services provided by the ISPs. If the ISP is really capacity constrained, then its premium service quality cannot be made very high unless its ordinary service is also maintained at a corresponding level. The advantage is that it does not impose a fixed/hard requirement on the ISPs, and the rule is flexible for different types of providers; e.g., a mobile provider might maintain lower QoS for its ordinary (best-effort) and its premium service (e.g., 1 Mbps), while a high-speed broadband provider can offer a very high-quality premium service (e.g., 10 Mbps) when its ordinary service already guarantees quite good quality (e.g., 2 Mbps). By imposing such a relative regulation, it does not impose any hard requirement for ISPs to fulfill. In comparison, a minimum-requirement rule reads more like “you have to become a common carrier first, so as to be qualified to differentiate services”. Both have similarities and differences in terms of justifications and practicality. (A short sketch after the reference list below illustrates this relative rule.)
43.  Para. 142-160: these are legal authority and considerations, not our expertise. We suppose the rules need to avoid imposing “common carriage” kinds of regulation on ISPs.
44.  Para. 161-176: this is dispute resolution. We comment that our proposed Shapley value mechanism can be used for inter-AS dispute resolution. However, it is more on the peering side, not on the end-user side.

References
[1] Richard T. B. Ma and Vishal Misra. The Public Option: A Non-Regulatory Alternative to Network Neutrality. IEEE/ACM Transactions on Networking, Volume 21, Issue 6, pp. 1866 - 1879, December, 2013.
[2] Jingjing Wang, Richard T. B. Ma and Dah Ming Chiu.  Paid Prioritization and Its Impact on Net Neutrality. Proceedings of IFIP Networking Conference 2014.
[3] Richard T. B. Ma, Dah Ming Chiu, John C. S. Lui, Vishal Misra and Dan Rubenstein. On Cooperative Settlement Between Content, Transit and Eyeball Internet Service Providers. IEEE/ACM Transactions on Networking, Volume 19, Issue 3, pp. 802 - 815, June, 2011.
[4] Richard T. B. Ma, Dah Ming Chiu, John C. S. Lui, Vishal Misra and Dan Rubenstein. Internet Economics: The Use of Shapley Value for ISP Settlement. IEEE/ACM Transactions on Networking, Volume 18, Issue 3, pp. 775 - 787, June, 2010.
[5] Jing Tang and Richard T. B. Ma. Regulating Monopolistic ISPs without Neutrality. Proceedings of IEEE International Conference on Networking Protocols (ICNP) 2014.
[6] Richard T. B. Ma. Subsidization Competition: Vitalizing the Neutral Internet. Working paper. http://arxiv.org/pdf/1406.2516v1.pdf.
[7] Thomas Newton. ISP Traffic Management: BT vs Virgin vs Sky vs TalkTalk vs EE. http://recombu.com/digital/news/isp-traffic-management-bt-sky-virgin-media-ee-talktalk_M11045.html
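To make the relative-quality rule we advocate in [5] (points 13, 22 and 42 of the response above) a bit more concrete, here is a minimal sketch; the permitted ratio and the class throughputs are invented purely for illustration and are not values proposed in [5]:

```python
def within_quality_difference(classes, max_ratio=2.0):
    # classes maps a service-class name to its delivered throughput in Mbps.
    # The rule: the best class may not exceed the worst class by more than
    # a permitted ratio, so an ISP cannot turn its ordinary class into
    # "damaged goods" while selling a premium class.
    best = max(classes.values())
    worst = min(classes.values())
    return best <= max_ratio * worst

# A capacity-constrained mobile provider with modest tiers: compliant.
print(within_quality_difference({"best_effort": 1.0, "premium": 2.0}))    # True
# A provider that starves its ordinary class to push the premium class: violation.
print(within_quality_difference({"best_effort": 0.5, "premium": 10.0}))   # False
```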


Wednesday, June 11, 2014

How much competition is enough?

The topic of competition is a very interesting one, with layers and layers of complexity. While I firmly believe that the path to an Open Internet goes through building competition at the broadband level, and not through complex network neutrality regulations, the issue is not simple.

On the one hand, we have the example of the UK, where Ofcom has kept a pretty much hands-off approach to Network Neutrality, and the result has been a very open Internet that is market and competition driven. Almost every consumer in the UK has access to 4 ISPs that are similar in terms of capabilities, and the Internet has remained "neutral". On the other hand, we have the example of the US, where although significant portions of the nation have at least two comparable broadband providers (say, metropolitan DC and New Jersey, where Comcast and Verizon FiOS are widely available), the "openness of the Internet" has been a problem. Specifically, there have been peering disputes between Netflix and broadband providers, or between the CDNs employed by Netflix and broadband providers. Netflix and Level3 have been complaining loudly that this is coercion from the ISPs and that quality is being deliberately degraded to extract a toll from the content providers. So the question is - if 4 competitors provide Internet openness in the UK, why do 2 providers fail to do so in the US? What gives the broadband providers in the US the confidence to flex their muscles, whereas similar profit oriented entities in the UK don't?

The answer to that involves a lot of factors. First off is the question of long term contracts - in the UK they are prohibited as far as I know, whereas in the US they are commonplace. This makes "competition" less than perfect. Secondly, since the broadband providers in the US are also invariably cable providers, there are vertically integrated monopolies in the US that are absent in the UK. There are many factors that make an ISP sticky for a consumer.  However, the point that I wanted to bring across in this post is what happens when there is "true" competition  - is there a magic number of competitors (4 vs 2) that ensures openness and "good behavior" by ISPs? Isn't 2 providers competing the same as 4 providers competing?

The answer to that is no. Competition is monotonically better for consumers. 3 providers competing is better than 2, and 4 is better than 3. And this is with the assumption that there is no collusion etc. happening with a smaller number of competitors, i.e. the best case scenario for competition. In a prior work, we applied cooperative game theory techniques to analyze the Internet ecosystem:

Richard T. B. Ma, Dah Ming Chiu, John C. S. Lui, Vishal Misra and Dan Rubenstein. On Cooperative Settlement Between Content, Transit and Eyeball Internet Service Providers. Proceedings of the 2008 ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT 2008), Madrid, Spain, December 2008.


The associated talk we gave is here, but let me walk through some numbers around competition that can explain the behavior we have observed.


An important concept of cooperative game theory is the Shapley value, which is the share that an individual gets of the value generated by the coalition it is part of. The Shapley value incorporates various factors, like the value an individual brings to the coalition, the value of the coalition without that individual, etc., and the resulting formula gives guidance on what a rational individual would do to maximize its share under all scenarios. Our work derives the following formula for the "fair share" (the Shapley value) of each class of provider in the economic ecosystem of the Internet:

Let’s say there are m content providers and n ISPs. Then each content provider's share of the value generated (V) is the fraction n/(m*(n+m)), and each ISP's share is m/(n*(n+m)).

Let’s say Netflix is the sole content provider (m=1) at one end and there are 2 ISPs competing for customers (n=2). Then Netflix's share is 2/3 and each ISP's share is 1/6. So for every 12 dollars, say, that a customer generates for Netflix every month, each ISP has reason to believe that it deserves 2 dollars of it, because Netflix's business wouldn't exist without the ISPs (I am deliberately leaving out arguments about broadband being a utility that every consumer has a right to, etc.).

If the number of ISPs competing now becomes 3, the Netflix share becomes 3/4 and the ISP share is 1/12 each. If you move to 4 ISPs, then the Netflix share becomes 4/5, and each ISP's share is 1/20. Now, for the 12 dollars that Netflix generates, each ISP believes it deserves 60 cents of it.
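A quick sketch that reproduces these shares from the formula above, using the single-content-provider and 12-dollar example (the code just evaluates the closed-form expressions; nothing new is assumed):

```python
from fractions import Fraction

def shapley_shares(m, n):
    # Per-provider shares of the total value V from the formula above:
    # each of the m content providers gets n/(m*(n+m)) of V,
    # each of the n ISPs gets m/(n*(n+m)) of V.
    cp_share = Fraction(n, m * (n + m))
    isp_share = Fraction(m, n * (n + m))
    return cp_share, isp_share

for n_isps in (2, 3, 4):
    cp, isp = shapley_shares(m=1, n=n_isps)
    print(n_isps, cp, isp, float(12 * isp))
# 2 2/3 1/6 2.0   -> each ISP feels entitled to 2 dollars of the 12
# 3 3/4 1/12 1.0
# 4 4/5 1/20 0.6  -> down to 60 cents per ISP
```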

So moving from 2 to 4 the “rightful share” of the value generated reduces to 1/20th from 1/6th - it is plausible that it is not worth it at that point to play hardball and extract that revenue from Netflix and instead the ISPs are more interested in winning and keeping customers. Maybe at 1/6th (2 ISPs competing) it is worth it to lose a few customers if you end up extracting more from the content provider but increasing the level of competition reduces the utility of that tactic. At some point the expected payoff (of toll) from the content provider falls below the expected loss (of customer revenue) to competing ISPs and at that point "competition is enough".

So more competition is better for consumers, and the cooperative game theory analysis provides some numbers to reason about how much better.