There are a few points of disagreement between consumers and Internet Service Providers. Some of these issues are easily resolved, while others become contentious because they are intertwined with the others. The big three are:
1. Bandwidth metering - the lack of consumer tools to measure bandwidth used. (Even though the ISP measures bandwidth, that information is not made available to the end user.) Metering is an important aspect of any utility; just look at water, electricity and natural gas, which are also piped into our homes. Connectivity should be treated as a commodity: just as these physical commodities are metered, so should our digital service be.
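The meter itself need not be complicated. As a minimal sketch, assuming the ISP exposed per-flow byte counts to the subscriber (a hypothetical feed; no real ISP API is implied), a utility-style meter is just a running tally:

```python
# Minimal sketch of a consumer-side bandwidth meter: tally bytes from
# hypothetical (direction, byte_count) usage records into the kind of
# monthly total a water or electric meter would display.

def meter_usage(flow_records):
    """Sum byte counts per direction from (direction, byte_count) records."""
    totals = {"down": 0, "up": 0}
    for direction, nbytes in flow_records:
        totals[direction] += nbytes
    return totals

def as_gigabytes(nbytes):
    # Decimal gigabytes, the way utilities typically express usage.
    return nbytes / 1e9

records = [("down", 4_000_000_000), ("up", 250_000_000), ("down", 1_000_000_000)]
totals = meter_usage(records)
print(as_gigabytes(totals["down"]))  # 5.0 GB downstream this period
```

The point is that the ISP already has these numbers; surfacing them to the customer is the missing piece.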
2. Bandwidth caps - As the world has moved from point-to-point communications paths to packet-switched communications, the congestion caused by overselling downstream connectivity creates a battle between the consumer and the service provider. This is similar to infrastructure engineering in other industries, such as water and electrical utilities, where the available resources cannot provide 100% of rated service to all customers at the same time. If every person in New York City flushed their toilet at exactly the same moment, the load on the city's water and sewage infrastructure could cause some serious issues. The same problem exists in the digital world. The pipes that come into our homes today are capable of incredible speeds, but normal usage is expected to consume only a fraction of that capacity, and the up-channel piping isn't big enough to carry the full load either.
However, there is an important difference when it comes to Internet service. In a physical commodity space, the infrastructure will break if it overflows: pipes can burst, mountings can come undone, electrical conduits can heat up or blow up, and gas pressure can cause serious damage to the system. On the Internet, excess traffic can simply be dropped on the floor. That's something you just can't do with raw sewage, but in a digital world the traffic will disappear. In fact, the Internet infrastructure is DESIGNED with this flexibility in mind. From the routers (and the QoS markings in the packets) to the protocols themselves (TCP retransmits when traffic gets lost, ICMP messages, and so on), the Internet is designed for overflow and failure. The service providers have the capability, and should have the know-how, to control bandwidth so that everyone gets a fair share of it even while the overflow is occurring.
Example: Clients A and B both have a 10 Mb/s pipe and the ISP has a 15 Mb/s uplink. Client A uses his connection all the time, while Client B only uses it on occasion. The ISP has the capability to begin marking Client A's traffic once he passes some set limit, perhaps 100 GB for the month. Now, Client A and Client B both attempt to transfer a movie at the end of the month. Because Client A's traffic is marked, when the uplink has to decide whose traffic gets priority, Client B's traffic has a better chance of getting through. The pipe still services both clients, but Client A finds more of his traffic being dropped on the ground because his usage for the month has marked him as a hog. Even at full capacity, Client A still receives 5 Mb/s of service, and the downgrade lasts only as long as Client B is using his full pipe.
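The allocation in that example can be sketched in a few lines. This is an illustration of the priority scheme, not any ISP's actual policy; the client names, the 15 Mb/s uplink, and the 100 GB threshold are all taken from (or invented for) the example above:

```python
# Sketch of priority allocation on an oversubscribed uplink: clients
# under their monthly cap are served first, and clients marked as
# heavy users split whatever capacity remains.

UPLINK_MBPS = 15
CAP_GB = 100  # monthly usage beyond this gets a client's traffic marked

def allocate(demands_mbps, monthly_usage_gb):
    """Return per-client Mb/s given demands and month-to-date usage."""
    marked = {c for c, gb in monthly_usage_gb.items() if gb > CAP_GB}
    alloc = {}
    remaining = UPLINK_MBPS
    # Unmarked (under-cap) clients are served first.
    for client, want in demands_mbps.items():
        if client not in marked:
            alloc[client] = min(want, remaining)
            remaining -= alloc[client]
    # Marked clients split whatever capacity is left over.
    for client in marked:
        alloc[client] = min(demands_mbps[client], remaining / len(marked))
    return alloc

# End of month: A has used 140 GB (marked), B only 30 GB.
print(allocate({"A": 10, "B": 10}, {"A": 140, "B": 30}))
# {'B': 10, 'A': 5.0} -- A still gets service, just a smaller share
```

Note that Client A is never cut off; his share merely shrinks while the unmarked client is filling his own pipe, which is exactly the graceful downgrade described above.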
In summary, bandwidth caps are a tool and should not be absolutes for customers. ISPs don't pay extra for each bit they transmit to the central routers; they pay for certain uplink speeds, whether they fill those pipes or not. The tools exist to ensure that each client gets a fair shot at that bandwidth, and with competition, the free market should ensure that they don't oversubscribe too greatly. A consumer's bandwidth cap should be expressed as the point where service may be downgraded as necessary. Users should be taught to understand the limitations of the Internet and uplink oversubscription, and caps are a good tool for that education. Coupled with bandwidth metering, users can monitor the level of service they receive and understand why it gracefully downgrades with overuse.
3. Net Neutrality - Inextricably linked to issue #2 are the engineering choices made when managing oversold network bandwidth. ISPs have, in the past, decided that because the heaviest consumers of resources are presumed to be using them for nefarious purposes, they can control oversold bandwidth by cutting out anything they think is nefarious. Unfortunately, this captures the innocent in their nets. As an example, P2P mechanisms are used by pirates to transfer and share files on the Internet, but those same protocols are used by game companies to distribute patches to their millions of users. It is nearly impossible for ISPs to constantly be aware of which P2P connections are being used for lawful versus unlawful purposes. In attempting to control the unlawful behavior, it is too easy for them to cut off harmless transmissions and degrade the very service they are trying to enhance.
Making decisions on what traffic to pass and what traffic to block gets the ISP into the liability game. Restricting traffic that is actually driving a heart monitor, for example, because it looks like someone hacking into a medical center, would be a potentially disastrous action. ISPs should avoid liability altogether by ignoring the flavor of the traffic they carry. Each and every packet delivered to and from their customers should be treated equally, provided the user is behaving in accordance with the Terms of Service.
There is one exception to this filtering that I believe to be important. IP packets carry a source and a destination address. I believe there is both a need and a responsibility to the community for source addresses to be verified as coming from a subscribed connection (i.e., if I am an IP provider, I should be checking that the source address of traffic coming from your connection is actually an address I have assigned to you). I would love to hear commentary that argues against this, but IMHO ingress filtering is a long-overdue and necessary component of keeping the Internet safe from the 'bad guys'.
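This kind of source-address validation is standardized as ingress filtering in BCP 38 / RFC 2827. The check itself is simple; as a minimal sketch (with a hypothetical table of subscriber-line-to-prefix assignments standing in for the ISP's real provisioning data):

```python
# Sketch of BCP 38 / RFC 2827 ingress filtering: drop any packet whose
# source address falls outside the prefix the ISP assigned to the
# subscriber line it arrived on. The assignment table is hypothetical.
import ipaddress

ASSIGNED = {  # subscriber line -> prefix delegated to that line
    "line-1": ipaddress.ip_network("203.0.113.0/29"),
    "line-2": ipaddress.ip_network("198.51.100.8/29"),
}

def permit(line, src_addr):
    """Permit the packet only if src_addr belongs to the line's prefix."""
    prefix = ASSIGNED.get(line)
    return prefix is not None and ipaddress.ip_address(src_addr) in prefix

print(permit("line-1", "203.0.113.5"))   # True: legitimate source
print(permit("line-1", "198.51.100.9"))  # False: spoofed, drop it
```

Because the ISP already knows exactly which addresses it handed to each subscriber, this is one of the few filters it can apply with no risk of misjudging the traffic's purpose.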