In 1969, the U.S. Department of Defense created ARPANET, the precursor to today's internet. Around the same time, the SWIFT protocol for interbank money transfers was established. Both are early examples of distributed systems: collections of independent computers that appear to their users as a single coherent system.
Many people only discover they have a distributed system when the crash of a computer they have never heard of affects the whole system. This is often the result of assumptions the architects and designers of distributed systems are prone to make.
In 1994, Peter Deutsch, then at Sun Microsystems, catalogued these assumptions to explore what can go wrong in distributed systems. In 1997, James Gosling added to the list, creating what is now commonly known as the eight fallacies of distributed computing. Traditional approaches to architecting and building distributed systems, which rely on time-based replication, fall foul of many of these fallacies and produce systems that are inefficient, insecure and costly to maintain. Modern approaches, built on consensus algorithms such as Paxos, overcome many of these hurdles.
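Paxos is easier to grasp with a concrete sketch. Below is a minimal, single-decree Paxos round in Python, simulated in memory with no networking or failure handling; the names (Acceptor, propose) are illustrative, not from any particular library. A proposer first gathers promises from a majority of acceptors, then must adopt the value of the highest-numbered proposal already accepted before asking the majority to accept it.

```python
# A minimal, single-decree Paxos sketch (in-memory; no networking,
# timeouts or crash recovery). Illustrative only.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Acceptor:
    promised: int = -1                          # highest proposal number promised
    accepted: Optional[Tuple[int, str]] = None  # (number, value) last accepted

    def prepare(self, n: int):
        """Phase 1b: promise to ignore proposals numbered below n."""
        if n > self.promised:
            self.promised = n
            return True, self.accepted          # promise, plus any prior acceptance
        return False, None

    def accept(self, n: int, value: str) -> bool:
        """Phase 2b: accept unless a higher-numbered promise was made."""
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return True
        return False

def propose(acceptors, n: int, value: str) -> Optional[str]:
    """Run one proposal round; return the chosen value on success."""
    quorum = len(acceptors) // 2 + 1

    # Phase 1: gather promises from a majority of acceptors.
    responses = [a.prepare(n) for a in acceptors]
    granted = [acc for ok, acc in responses if ok]
    if len(granted) < quorum:
        return None                             # could not form a quorum

    # Safety rule: if any acceptor already accepted a value, propose
    # the value of the highest-numbered accepted proposal instead.
    prior = [acc for acc in granted if acc is not None]
    if prior:
        value = max(prior)[1]

    # Phase 2: ask the acceptors to accept (n, value).
    votes = sum(a.accept(n, value) for a in acceptors)
    return value if votes >= quorum else None

acceptors = [Acceptor() for _ in range(5)]
print(propose(acceptors, n=1, value="replicate-to-eu"))  # -> replicate-to-eu
print(propose(acceptors, n=2, value="replicate-to-us"))  # -> replicate-to-eu
```

The second call illustrates the key property: once a majority has accepted a value, any later proposal, even with a different value, is forced to re-propose the value already chosen, so all replicas converge on a single decision.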
1. The network is reliable
The first fallacy is an easy way to set yourself up for failure, as Murphy's law guarantees that something will always go wrong with the network, whether it is a power failure or a cut cable. However, Active Transactional Data Replication ensures that should a single server or an entire data center go offline, the information you need will still be available, as each data node is continuously synchronised without geographical constraints.
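Whatever the replication technology, any code that touches the network has to assume calls can fail. A minimal sketch of that defensive posture in Python (not a description of any particular product): retry a write with exponential backoff and jitter, where the send callable is a hypothetical stand-in for a real network call.

```python
# Retrying an unreliable network call with bounded, jittered backoff.
# Illustrative sketch; send is a hypothetical network-call callable.

import random
import time

class ReplicationError(Exception):
    """Raised when a write cannot be delivered after all retries."""

def replicate_with_retry(send, payload, attempts=5, base_delay=0.1):
    """Try to deliver payload, backing off exponentially between failures."""
    for attempt in range(attempts):
        try:
            return send(payload)                # may raise on network failure
        except OSError:                         # covers ConnectionError, timeouts, etc.
            if attempt == attempts - 1:
                raise ReplicationError("replica unreachable after retries")
            # Exponential backoff with jitter to avoid retry storms.
            time.sleep(base_delay * 2 ** attempt * (1 + random.random()))

# Usage (send_to_replica is a stand-in for a real network call):
# replicate_with_retry(send_to_replica, payload={"key": "value"})
```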