An oft-touted phrase for IPv6 is something to the effect of "an address for every grain of sand"[1]. I have a problem with this statement. It's one of those statements that is technically true but misleading when used as the answer to the question it is implied to be answering.
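(For what it's worth, the raw arithmetic behind the slogan does hold up. Here's a quick back-of-the-envelope sketch in Python; the grain-of-sand figure is only the commonly cited rough estimate for the world's beaches, not a measured number:)

```python
# Back-of-the-envelope check of the "grain of sand" claim.
total_ipv6_addresses = 2 ** 128   # about 3.4e38
grains_of_sand = 7.5e18           # rough popular estimate, not a measured figure

print(f"IPv6 addresses:      {float(total_ipv6_addresses):.3e}")
print(f"Addresses per grain: {total_ipv6_addresses / grains_of_sand:.3e}")
```

That works out to tens of billions of billions of addresses per grain -- which is exactly why the slogan is so tempting, and exactly why it misses the point about routing.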
If IP addresses were simply assigned to devices, and backbone routers were made aware of every single one, the phrase might be true. That is not how the Internet works. It's important to view the size of the IPv6 address space in the right light, because otherwise we can end up in some of the same troubles as the current IPv4 Internet. These troubles include not only the overall supply of IP addresses but also the routing of those addresses across network operator boundaries. After all, what's an IP address without global reachability? :-)
IP addressing works hierarchically: individual IP addresses are grouped into larger IP address blocks (known as "prefixes" and sometimes "subnets"). In the early days of IPv4 that was the Class A, B, and C system. While it was replaced with CIDR, the new system still maintained a hierarchy based on network size -- it was simply less rigid. This is still necessary in an IPv6 world.
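To make the hierarchy concrete, here's a small sketch using Python's ipaddress module. The prefixes come from the 2001:db8::/32 documentation range, and the specific carve-outs (a /36 for an ISP, a /48 for a customer site) are illustrative assumptions, not real allocations:

```python
import ipaddress

# Illustrative hierarchy using the 2001:db8::/32 documentation range.
rir_allocation = ipaddress.ip_network("2001:db8::/32")        # registry-sized block
isp_block      = ipaddress.ip_network("2001:db8:1000::/36")   # a hypothetical ISP's carve-out
customer_site  = ipaddress.ip_network("2001:db8:1020::/48")   # a hypothetical customer site

print(customer_site.subnet_of(isp_block))    # True  -- the site sits inside the ISP's block
print(isp_block.subnet_of(rir_allocation))   # True  -- the ISP's block sits inside the allocation
print(rir_allocation.num_addresses)          # 2**96 addresses behind a single routing entry
```

The whole point of the hierarchy is that last line: an enormous swath of address space can hide behind one route in the global tables.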
The size of the protocol's address space -- and how it is broken up -- is of the utmost importance to routing. One of the greatest ironies of IPv4 address consumption is that multi-homing -- connecting to more than one upstream Internet provider for performance, cost, and reliability reasons -- requires an IP block of a certain minimum size. If your block is smaller than what the community accepts (a threshold set through rough consensus, and subject to stragglers, mavericks, and router capacity improvements), you can't multi-home.
In the IPv4 world this has resulted in wasted IP addresses -- addresses that are never actually assigned to end-user devices -- just so someone can multi-home. It has also made redundancy more difficult for smaller networks. Even if they end up with sufficient IP space, it is likely from one of their ISPs and not portable. If they were truly bigger (as in, if they actually were going to use all of those IP addresses) they'd be able to bypass their ISPs, getting address space from one of the geographically appropriate pseudo-NGOs (the regional Internet registries) that allocate IP address space to larger consumers.
Why all the fuss? Why not just allow anyone and everyone to inject any size block into the Internet routing tables? Because routers have finite resources. The larger the routing tables, the more memory and CPU used for every packet pushed through the router. At some point a line is drawn beyond which it is no longer generally accepted to be economically viable. This is where the generally accepted "smallest block we'll accept into our routing tables" policies come from (in the present IPv4 world the smallest acceptable block is generally a /24, which provides 254 assignable IP addresses for end-user devices).
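A quick illustration of that /24 boundary, again with Python's ipaddress module (192.0.2.0/24 is a documentation range, used here purely as an example):

```python
import ipaddress

# The /24 boundary commonly filtered on in today's IPv4 tables.
block = ipaddress.ip_network("192.0.2.0/24")      # documentation range, example only
print(block.num_addresses)                        # 256 total
print(len(list(block.hosts())))                   # 254 assignable (network/broadcast excluded)

# A /25 works fine inside your own network but is generally too small
# to be carried in most operators' global routing tables.
too_small = ipaddress.ip_network("192.0.2.0/25")
print(len(list(too_small.hosts())))               # 126 assignable
```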
One of the still-active debates in IPv6 is how multi-homing will be performed in the long run. Will the current IPv4 model work? Or does the current model artificially restrict how many folks would actually multi-home if they could? Does the current system encourage too much address waste -- and is that even still a concern? How rapidly would the routing tables grow if a different approach were taken? How will we handle the additional resource burden of the continued co-existence of both IPv4 _and_ IPv6 routing tables for quite some time? And so on.
IP address portability is (indirectly) addressed in IPv6, though how well that works out remains to be seen. Under this model, smaller sites still won't necessarily have their own permanent globally routable IP address blocks. They'll have plenty of real global IP addresses assigned by their ISP -- without any fuss -- but those IPs will still be controlled by their ISP (i.e. if they opt to change ISPs they will have to return them and get new ones from their new ISP). Switching IP address blocks is (supposedly) made easier, though. The idea is that deeper auto-configuration gets adopted pretty much across the board -- something akin to today's DHCP on steroids -- along with very tight integration with DNS, and somehow overcoming DNS caching.
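To picture what that ISP-driven renumbering means in practice, here's a minimal sketch: the hosts keep their interface identifiers while the site's prefix changes from the old ISP's block to the new ISP's. Both /48s below come from the documentation range and the host suffixes are purely hypothetical:

```python
import ipaddress

# Renumbering sketch: hosts keep their interface identifiers while the
# site's /48 changes from the old ISP's prefix to the new ISP's prefix.
# Both prefixes are documentation-range examples, not real allocations.
old_prefix = ipaddress.ip_network("2001:db8:aaaa::/48")   # from the old ISP
new_prefix = ipaddress.ip_network("2001:db8:bbbb::/48")   # from the new ISP

interface_ids = [0x1, 0x2, 0x1234]   # hypothetical host suffixes in the site's first /64

for iid in interface_ids:
    old_addr = old_prefix.network_address + iid
    new_addr = new_prefix.network_address + iid
    print(f"{old_addr}  ->  {new_addr}")
```

The mechanical part of the swap is easy; the hard parts are everything that has the old addresses written down -- configs, ACLs, and above all DNS.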
I am not advocating against IPv6. On the contrary, for its successful widespread adoption I think expectations must be set appropriately. And any areas still open for debate -- which don't necessarily have to hold back adoption -- need to continue to be widely discussed. The more awareness there is, the less a new adopter is blindsided -- and thus the happier they'll be with the outcome once they proceed with their adoption efforts. And, more importantly, the faster that more definite solutions and best practices can be understood and disseminated.
As always, I welcome comments, including contesting any of my conclusions and assumptions above. Discussion and debate are how nearly all progress is made, whether with oneself or with others. :-)
[1] “One of the major advantages of the new Internet protocol (IPv6) is that it overcomes the growth problems of the Internet caused by the current limitations in the number of IP addresses needed for every computer or other device in order to access the Internet. The new protocol allows for a virtually unlimited number of (2^128) addresses – enough to assign an address to every grain of sand on all the world’s beaches.”
--“European Commission hosts inaugural event to celebrate the launch of the world's first all IPv6 research network,” Brussels, 14th January 2004