One of the biggest things computer scientists have to deal with is backward compatibility. For example, suppose a company has written a program that connects to its remote server and performs some operation. If the company decides to update the server and radically change the way it operates, it can't just write completely new code: if it did, all the clients still running the old software would no longer be able to talk to the new server. So the company has to keep this in mind and make sure the new server also accepts the same commands the old one did.
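As a small sketch of that idea (the command format and version prefix here are hypothetical, not from any real protocol), the new server might keep a handler for the legacy command format alongside the new one:

```python
# Sketch of backward-compatible command handling (hypothetical protocol).
# Old clients send plain commands like "GET item42"; new clients send a
# versioned form like "v2 GET item42". The server must accept both.

def handle_legacy(command: str) -> str:
    """Handler that reproduces the old server's behavior."""
    verb, _, arg = command.partition(" ")
    return f"OK {verb} {arg} (legacy path)"

def handle_v2(command: str) -> str:
    """Handler for the redesigned protocol."""
    verb, _, arg = command.partition(" ")
    return f"OK {verb} {arg} (v2 path)"

def dispatch(raw: str) -> str:
    """Route a request to the right handler based on its format."""
    if raw.startswith("v2 "):
        return handle_v2(raw[len("v2 "):])
    # Anything without a version prefix is assumed to be an old client.
    return handle_legacy(raw)
```

So `dispatch("GET item42")` takes the legacy path while `dispatch("v2 GET item42")` takes the new one — the old clients never notice the server changed.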
This is one of the reasons the internet is rather messed up. Its precursor, the ARPANET, was very lax about things like security and authentication:
The internet's early architects built the system on the principle of trust. Researchers largely knew one another, so they kept the shared network open and flexible — qualities that proved key to its rapid growth.
But spammers and hackers arrived as the network expanded and could roam freely because the internet doesn't have built-in mechanisms for knowing with certainty who sent what.
The network's designers also assumed that computers are in fixed locations and always connected. That's no longer the case with the proliferation of laptops, personal digital assistants and other mobile devices, all hopping from one wireless access point to another, losing their signals here and there.
Engineers tacked on improvements to support mobility and improve security, but researchers say all that adds complexity, reduces performance and, in the case of security, amounts at most to bandages in a high-stakes game of cat and mouse.
The internet, at its core, is not a very secure thing. Protocols have been layered on top of it to try to add security, but it's all very ad hoc. So these researchers are proposing replacing the current internet with something that is secure at its very core:
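TLS is a concrete instance of that ad hoc layering: the underlying TCP connection authenticates nothing, so a separate protocol gets wrapped around it after the fact. A minimal illustration using Python's standard `ssl` module (just the client-side setup, no real connection):

```python
import ssl

# TLS illustrates security bolted on above the insecure core transport.
# This context would be wrapped around an ordinary TCP socket, which by
# itself verifies neither the identity of the peer nor the data's origin.
context = ssl.create_default_context()

# The defaults show what the wrapper adds on top of raw TCP:
print(context.verify_mode)     # peer certificates are checked
print(context.check_hostname)  # and the hostname is verified against them
```

All the certificate checking lives in this extra layer; strip it away and you are back to the trusting network the original designers built.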
Although it has already taken nearly four decades to get this far in building the internet, some university researchers with the US federal government's blessing want to scrap all that and start over.
The idea may seem unthinkable, even absurd, but many believe a "clean slate" approach is the only way to truly address security, mobility and other challenges that have cropped up since UCLA professor Leonard Kleinrock helped supervise the first exchange of meaningless test data between two machines on September 2, 1969.
They may be right, but it'll be almost impossible to actually do it.
A new network could run parallel with the current internet and eventually replace it, or perhaps aspects of the research could go into a major overhaul of the existing architecture.
These clean-slate efforts are still in their early stages, though, and aren't expected to bear fruit for another 10 or 15 years — assuming Congress comes through with funding.
Guru Parulkar, who will become executive director of Stanford's initiative after heading NSF's clean-slate programs, estimated that GENI alone could cost $350 million, while government, university and industry spending on the individual projects could collectively reach $300 million. Spending so far has been in the tens of millions of dollars.
And it could take billions of dollars to replace all the software and hardware deep in the legacy systems.
It would be an interesting development, though...