Thursday, April 22, 2010

Wide Area Network


A wide area network (WAN) is a computer network that covers a broad area (i.e., any network whose communications links cross metropolitan, regional, or national boundaries [1]). This is in contrast with personal area networks (PANs), local area networks (LANs), campus area networks (CANs), or metropolitan area networks (MANs), which are usually limited to a room, a building, a campus or a specific metropolitan area (e.g., a city), respectively.

WAN design options
WANs are used to connect LANs and other types of networks together, so that users and computers in one location can communicate with users and computers in other locations. Many WANs are built for one particular organization and are private. Others, built by Internet service providers, provide connections from an organization's LAN to the Internet. WANs are often built using leased lines. At each end of the leased line, a router connects to the LAN on one side and a hub within the WAN on the other. Leased lines can be very expensive. Instead of using leased lines, WANs can also be built using less costly circuit switching or packet switching methods.

Network protocols including TCP/IP deliver transport and addressing functions. Protocols including Packet over SONET/SDH, MPLS, ATM and Frame Relay are often used by service providers to deliver the links that are used in WANs. X.25 was an important early WAN protocol, and is often considered to be the "grandfather" of Frame Relay, as many of the underlying protocols and functions of X.25 are still in use today (with upgrades) by Frame Relay.

Academic research into wide area networks can be broken down into three areas: mathematical models, network emulation and network simulation.

Performance improvements are sometimes delivered via WAFS or WAN optimization.

Dynamic Host Configuration Protocol


The Dynamic Host Configuration Protocol (DHCP) is a computer networking protocol used by hosts (DHCP clients) to retrieve IP address assignments and other configuration information.

DHCP uses a client-server architecture. The client sends a broadcast request for configuration information. The DHCP server receives the request and responds with configuration information from its configuration database.

In the absence of DHCP, all hosts on a network must be manually configured individually - a time-consuming and often error-prone undertaking.

Dynamic Host Configuration Protocol automates network-parameter assignment to network devices from one or more fault-tolerant DHCP servers. Even in small networks, DHCP is useful because it can make it easy to add new machines to the network.

When a DHCP-configured client (a computer or any other network-aware device) connects to a network, the DHCP client sends a broadcast query requesting necessary information from a DHCP server. The DHCP server manages a pool of IP addresses and information about client configuration parameters such as the default gateway, the domain name, the DNS servers, other servers such as time servers, and so forth. On receiving a valid request, the server assigns the computer an IP address, a lease (the length of time the allocation is valid), and other IP configuration parameters, such as the subnet mask and the default gateway. The query is typically initiated immediately after booting, and must complete before the client can initiate IP-based communication with other hosts.

Depending on implementation, the DHCP server may have three methods of allocating IP addresses:

dynamic allocation: A network administrator assigns a range of IP addresses to DHCP, and each client computer on the LAN has its IP software configured to request an IP address from the DHCP server during network initialization. The request-and-grant process uses a lease concept with a controllable time period, allowing the DHCP server to reclaim (and then reallocate) IP addresses that are not renewed (dynamic re-use of IP addresses).
automatic allocation: The DHCP server permanently assigns a free IP address to a requesting client from the range defined by the administrator. This is like dynamic allocation, but the DHCP server keeps a table of past IP address assignments, so that it can preferentially assign to a client the same IP address that the client previously had.
static allocation: The DHCP server allocates an IP address based on a table with MAC address/IP address pairs, which are manually filled in (perhaps by a network administrator). Only requesting clients with a MAC address listed in this table will be allocated an IP address. This feature (which is not supported by all devices) is variously called Static DHCP Assignment (by DD-WRT), fixed-address (by the dhcpd documentation), DHCP reservation or Static DHCP (by Cisco/Linksys), and IP reservation or MAC/IP binding (by various other router manufacturers).
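
To make the three allocation methods concrete, here is a small Python sketch of the decision a DHCP server might make when a request arrives. It is illustrative only: the MAC address, pool range and lease time are invented, and it blends the static and dynamic styles (automatic allocation would additionally remember expired assignments and prefer them). A real server such as ISC dhcpd also handles the DISCOVER/OFFER handshake, persistent lease files and much more.

    # Toy DHCP allocation sketch (illustrative only, not a real server).
    # Static entries win first; otherwise a free address is leased from
    # the dynamic pool for a fixed lease time.
    import time

    STATIC = {"00:11:22:33:44:55": "192.168.1.10"}   # hypothetical MAC/IP pair
    POOL = ["192.168.1.%d" % n for n in range(100, 200)]
    LEASE_SECONDS = 3600
    leases = {}  # mac -> (ip, expiry)

    def allocate(mac):
        if mac in STATIC:                          # static allocation
            return STATIC[mac]
        ip, expiry = leases.get(mac, (None, 0))
        if ip and expiry > time.time():            # renew the client's lease
            leases[mac] = (ip, time.time() + LEASE_SECONDS)
            return ip
        taken = {addr for addr, exp in leases.values() if exp > time.time()}
        for candidate in POOL:                     # dynamic allocation
            if candidate not in taken:
                leases[mac] = (candidate, time.time() + LEASE_SECONDS)
                return candidate
        return None                                # pool exhausted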



IPv6


Internet Protocol version 6 (IPv6) is the next-generation Internet Protocol version designated as the successor to IPv4, the first implementation used in the Internet, which is still in dominant use. It is an Internet Layer protocol for packet-switched internetworks. The main driving force for the redesign of the Internet Protocol is the foreseeable IPv4 address exhaustion. IPv6 was defined in December 1998 by the Internet Engineering Task Force (IETF) with the publication of an Internet standard specification, RFC 2460.

IPv6 has a vastly larger address space than IPv4. This results from the use of a 128-bit address, whereas IPv4 uses only 32 bits. The new address space thus supports 2^128 (about 3.4×10^38) addresses. This expansion provides flexibility in allocating addresses and routing traffic and eliminates the primary need for network address translation (NAT), which gained widespread deployment as an effort to alleviate IPv4 address exhaustion.

IPv6 also implements new features that simplify aspects of address assignment (stateless address autoconfiguration) and network renumbering (prefix and router announcements) when changing Internet connectivity providers. The IPv6 subnet size has been standardized by fixing the size of the host identifier portion of an address to 64 bits to facilitate an automatic mechanism for forming the host identifier from Link Layer media addressing information (MAC address).
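
As an illustration of that mechanism, the Python sketch below derives a modified EUI-64 interface identifier from an invented MAC address, the way stateless address autoconfiguration can form the 64-bit host part of a link-local address. It is a simplification: modern systems often prefer randomized privacy-extension identifiers instead.

    # Sketch: modified EUI-64 host identifier from a 48-bit MAC address.
    def eui64_from_mac(mac):
        octets = [int(part, 16) for part in mac.split(":")]
        octets[0] ^= 0x02                              # flip the universal/local bit
        eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert FF:FE in the middle
        return ":".join("%x" % (eui[i] << 8 | eui[i + 1]) for i in range(0, 8, 2))

    # A hypothetical MAC yields the host part of a link-local address:
    print("fe80::" + eui64_from_mac("00:11:22:33:44:55"))
    # -> fe80::211:22ff:fe33:4455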

Network security is integrated into the design of the IPv6 architecture. Internet Protocol Security (IPsec) was originally developed for IPv6, but found widespread optional deployment first in IPv4 (into which it was back-engineered). The IPv6 specifications mandate IPsec implementation as a fundamental interoperability requirement.

In December 2008, despite marking its 10th anniversary as a Standards Track protocol, IPv6 was only in its infancy in terms of general worldwide deployment. A 2008 study[1] by Google Inc. indicated that penetration was still less than one percent of Internet-enabled hosts in any country. IPv6 has been implemented on all major operating systems in use in commercial, business, and home consumer environments.[2]

Intrusion detection system


An IDS is a device (or application) that monitors network and/or system activities for malicious activities or policy violations and produces reports to a Management Station. Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of possible incidents, which are violations or imminent threats of violation of computer security policies, acceptable use policies, or standard security practices.[1] Intrusion prevention is the process of performing intrusion detection and attempting to stop detected possible incidents.[1] Intrusion detection and prevention systems (IDPS) are primarily focused on identifying possible incidents, logging information about them, attempting to stop them, and reporting them to security administrators.[1] In addition, organizations use IDPSs for other purposes, such as identifying problems with security policies, documenting existing threats, and deterring individuals from violating security policies.[1] IDPSs have become a necessary addition to the security infrastructure of nearly every organization.[1]

IDPSs typically record information related to observed events, notify security administrators of important observed events, and produce reports.[1] Many IDPSs can also respond to a detected threat by attempting to prevent it from succeeding.[1] They use several response techniques, which involve the IDPS stopping the attack itself, changing the security environment (e.g., reconfiguring a firewall), or changing the attack’s content.[1]

There are two main types of IDS: network-based and host-based.

In a network-based intrusion-detection system (NIDS), the sensors are located at choke points in the network to be monitored, often in the demilitarized zone (DMZ) or at network borders. The sensor captures all network traffic and analyzes the content of individual packets for malicious traffic.

In a host-based system, the sensor usually consists of a software agent, which monitors all activity of the host on which it is installed, including file system, logs and the kernel. Some application-based IDS are also part of this category.

Network intrusion detection system (NIDS)
It is an independent platform that identifies intrusions by examining network traffic and monitors multiple hosts. Network Intrusion Detection Systems gain access to network traffic by connecting to a hub, network switch configured for port mirroring, or network tap. An example of a NIDS is Snort.
Host-based intrusion detection system (HIDS)
It consists of an agent on a host that identifies intrusions by analyzing system calls, application logs, file-system modifications (binaries, password files, capability/acl databases) and other host activities and state. An example of a HIDS is OSSEC.
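
As a toy illustration of the signature-matching idea at the heart of many of these systems, the Python sketch below flags payloads containing known-bad byte patterns. The signatures are invented for the example; a real NIDS such as Snort decodes protocols, reassembles streams and uses a far richer rule language.

    # Toy signature matcher (illustrative; not how a production IDS works).
    SIGNATURES = {
        b"/etc/passwd": "possible path traversal",
        b"' OR '1'='1": "possible SQL injection",
    }

    def inspect(payload):
        return [alert for pattern, alert in SIGNATURES.items() if pattern in payload]

    print(inspect(b"GET /../../etc/passwd HTTP/1.0"))
    # -> ['possible path traversal']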

Intrusion detection systems can also be system-specific using custom tools and honeypots.

Telecommunications service provider (TSP)


A telecommunications service provider or TSP is a type of communications service provider that has traditionally provided telephone and similar services. This category includes ILECs, CLECs, and mobile wireless communication companies.

While some people use the terms "telecom service provider" and "communications service provider" interchangeably, the term TSP generally excludes Internet service providers (ISPs), cable companies, satellite TV, and managed service providers.

TSPs provide access to telephone and related communications services. In the past, most TSPs in most countries were government owned and operated, owing to the large capital expenditure involved. Today, however, there are many private players in most regions of the world, and even most of the government-owned companies have been privatized.

The deregulation or privatization of telecom service providers first happened in the United States with the breakup of the Bell System.

TFTP Server


Trivial File Transfer Protocol (TFTP) is a file transfer protocol, with the functionality of a very basic form of File Transfer Protocol (FTP); it was first defined in 1980.[1]

Due to its simple design, TFTP could be implemented using a very small amount of memory. It was therefore useful for booting computers such as routers which did not have any data storage devices. It is still used to transfer small amounts of data between hosts on a network, such as IP phone firmware or operating system images when a remote X Window System terminal or any other thin client boots from a network host or server. The initial stages of some network-based installation systems (such as Solaris Jumpstart, Red Hat Kickstart, Symantec Ghost and Windows NT's Remote Installation Services) use TFTP to load a basic kernel that performs the actual installation.

Trivial File Transfer Protocol (TFTP) is a simple protocol to transfer files. It has been implemented on top of the User Datagram Protocol (UDP) using port number 69. TFTP is designed to be small and easy to implement, and therefore lacks most of the features of a regular FTP. TFTP only reads and writes files (or mail) from or to a remote server. It cannot list directories, and currently has no provisions for user authentication.

In TFTP, any transfer begins with a request to read or write a file, which also serves to request a connection. If the server grants the request, the connection is opened and the file is sent in fixed-length blocks of 512 bytes. Each data packet contains one block of data, and must be acknowledged by an acknowledgment packet before the next packet can be sent. A data packet of less than 512 bytes signals termination of a transfer. If a packet gets lost in the network, the intended recipient will time out and may retransmit its last packet (which may be data or an acknowledgment), thus causing the sender of the lost packet to retransmit that lost packet. The sender has to keep just one packet on hand for retransmission, since the lock-step acknowledgment guarantees that all older packets have been received. Notice that both machines involved in a transfer are considered senders and receivers: one sends data and receives acknowledgments, the other sends acknowledgments and receives data.
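
The lock-step exchange is simple enough to sketch in a few lines of Python. This toy read-request client assumes a reachable server at an invented address and a hypothetical file name, and omits the timeouts and retransmissions a real client needs:

    # Toy TFTP read (RFC 1350) over UDP; no timeouts or error handling.
    import socket, struct

    SERVER = ("192.0.2.1", 69)        # hypothetical server address
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # RRQ packet: opcode 1, then filename and mode as zero-terminated strings.
    sock.sendto(struct.pack("!H", 1) + b"boot.img\x00octet\x00", SERVER)

    received = b""
    while True:
        packet, addr = sock.recvfrom(516)                # 4-byte header + 512 data
        opcode, block = struct.unpack("!HH", packet[:4])
        received += packet[4:]
        sock.sendto(struct.pack("!HH", 4, block), addr)  # ACK this block
        if len(packet) - 4 < 512:                        # short block ends transfer
            break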

Three modes of transfer are currently supported by TFTP: netascii, which is 8-bit ASCII; octet, raw 8-bit bytes (this replaces the "binary" mode of earlier versions of the specification); and mail, netascii characters sent to a user rather than a file. Additional modes can be defined by pairs of cooperating hosts.

RIP


The Routing Information Protocol (RIP) is a dynamic routing protocol used in local and wide area networks. As such it is classified as an interior gateway protocol (IGP). It uses the distance-vector routing algorithm. It was first defined in RFC 1058 (1988). The protocol has since been extended several times, resulting in RIP Version 2 (RFC 2453). Both versions are still in use today; however, they are considered technically obsolete compared to more advanced techniques such as Open Shortest Path First (OSPF) and the OSI protocol IS-IS. RIP has also been adapted for use in IPv6 networks, a standard known as RIPng (RIP next generation), published in RFC 2080 (1997).

The routing algorithm used in RIP, the Bellman-Ford algorithm, was first deployed in a computer network in 1967, as the initial routing algorithm of the ARPANET.
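
To show what that distance-vector computation amounts to, here is a toy Python version of the Bellman-Ford relaxation a RIP router performs when a neighbour advertises its distances. The costs below are invented; real RIP also deals with route timeouts, split horizon and triggered updates.

    # Toy distance-vector update (the Bellman-Ford relaxation behind RIP).
    INF = 16   # RIP treats a metric of 16 hops as unreachable

    def update(table, cost_to_neighbour, neighbour_table):
        """Relax our table with a neighbour's advertised distances."""
        changed = False
        for dest, cost in neighbour_table.items():
            candidate = min(cost_to_neighbour + cost, INF)
            if candidate < table.get(dest, INF):
                table[dest] = candidate
                changed = True
        return changed

    table = {"net-A": 1}
    update(table, 1, {"net-B": 2, "net-A": 5})
    print(table)   # {'net-A': 1, 'net-B': 3}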

The earliest version of the specific protocol that became RIP was the Gateway Information Protocol, part of the PARC Universal Packet internetworking protocol suite, developed at Xerox PARC. A later version, named the Routing Information Protocol, was part of Xerox Network Systems.

A version of RIP which supported the Internet Protocol (IP) was later included in the Berkeley Software Distribution (BSD) of the Unix operating system. It was known as the routed daemon. Various other vendors would create their own implementations of the routing protocol. Eventually, RFC 1058 unified the various implementations under a single standard.

Tuesday, April 13, 2010

Trend Micro


Trend Micro (TYO: 4704) is a global developer of software and services to protect against computer viruses, malware, spam, and Web-based threats. It is headquartered in Tokyo. Trend Micro was founded in 1988 in Los Angeles by Steve Chang, Jenny Chang and Eva Chen[2]. Steve Chang served as Trend Micro's CEO until 2004 when he was succeeded by co-founder Eva Chen, who had served as CTO since 1996.

In addition to its business products, Trend Micro is known for producing the PC-cillin family of desktop internet security software for consumers, and is the creator of the free online browser-based malware scanner, HouseCall. TrendLabs, Trend Micro's global security threat research, development and support organization, operates in 6 centers worldwide. Trend Micro has over 4000 employees in over 30 countries[3]. The company is on the US Federal Buying Schedule 70 through resellers Softchoice, ASAP Software, and Synnex.

Products
Consumer/Home and Home Office

Trend Micro Internet Security Pro
Trend Micro Internet Security
Trend Micro Antivirus + Antispyware
Small Business

Worry-Free Business Security
InterScan Messaging Hosted Security
Trend Micro SecureSite
Medium Business and Enterprise

OfficeScan Client Server Suite
InterScan Web Security
InterScan Messaging Security
Open Source

OSSEC
Popular Free Tools

HouseCall
CWShredder
Smart Surfing for the iPhone
HijackThis

There are many forms of this software. I can tell you that I recommend this program. One of the things I don't like about it is that it slows down your computer tremendously, but that is one of the drawbacks you have to deal with when it comes to keeping your PC protected from harmful infections.

Malwarebytes Anti-Malware


Malwarebytes' Anti-Malware (MBAM) is a computer application that finds and removes malware.[1] Made by Malwarebytes Corporation, it was released in January 2008. It is available in a free version, which scans for and removes malware when started manually, and a paid version, which provides scheduled scans and real-time protection.

Design
MBAM is intended to find malware that other anti-virus and spyware programs generally miss, including but not limited to rogue security software, adware and spyware.[2][3]

Variants
MBAM is available in both a free and a paid edition.[1] The free edition must be run manually, while the paid version can automatically perform scheduled scans. The paid version also adds real-time protection and IP blocking to prevent access to malicious websites.[4]

Malwarebytes' Anti-Malware is a program I use daily along with Trend Micro to help keep my PC healthy and free of infection. I would recommend this program to anyone.


Internet Security


When a computer connects to a network and begins communicating with other computers, it is essentially taking a risk. Internet security involves the protection of a computer's Internet account and files from intrusion by an unknown user.[1] Basic security measures involve protection by well-selected passwords, changing of file permissions and backing up of the computer's data.

Security concerns are in some ways peripheral to normal business working, but serve to highlight just how important it is that business users feel confident when using IT systems. Security will probably always be high on the IT agenda simply because cyber criminals know that a successful attack can be very profitable. This means they will always strive to find new ways to circumvent IT security, and users will consequently need to be continually vigilant. Whenever decisions need to be made about how to enhance a system, security will need to be held uppermost among its requirements.

Some apparently useful programs also contain features with hidden malicious intent. Such programs are known collectively as malware; the main types are viruses, Trojans, worms, spyware and bots.

Malware is the most general name for any malicious software designed, for example, to infiltrate, spy on or damage a computer or other programmable device or system of sufficient complexity, such as a home or office computer system, network, mobile phone, PDA, automated device or robot.

Viruses are programs which are able to replicate their structure or effect by integrating themselves, or references to themselves, into existing files or structures on a penetrated computer. They usually also carry a malicious or humorous payload designed to threaten or modify the actions or data of the host device or system without consent, for example by deleting, corrupting or otherwise hiding information from its owner.

Trojans (Trojan horses) are programs which may pretend to do one thing, but in reality steal information, alter it or cause other problems on a host such as a computer or other programmable device or system.

Spyware includes programs that surreptitiously monitor keystrokes or other activity on a computer system and report that information to others without consent.

Worms are programs which are able to replicate themselves over a (possibly extensive) computer network, and also perform malicious acts that may ultimately affect a whole society / economy.

Bots are programs that take over and use the resources of a computer system over a network without consent, and communicate those results to others who may control the Bots.

www.wikipedia.org

Monday, April 12, 2010

Routers


A router, pronounced /ˈraʊtər/ in the United States and Canada, and /ˈruːtər/ in the UK and Ireland (to differentiate it from the tool used to rout wood), is a purposely customized computer used to forward data among computer networks beyond directly connected devices. (The directly connected devices are said to be in a LAN, where data are forwarded using Network switches.)

More technically, a router is a networking device whose software and hardware in combination are customized to the tasks of routing and forwarding information. A router differs from an ordinary computer in that it needs special hardware, called interface cards, to connect to remote devices through either copper cables or optical fiber cable. These interface cards are in fact small computers, specialized to convert electric signals from one form to another, with an embedded CPU or ASIC, or both. In the case of optical fiber, the interface cards (also called ports) convert between optical signals and electrical signals.

Routers connect two or more logical subnets, which do not share a common network address. The subnets in the router do not necessarily map one-to-one to the physical interfaces of the router.[1] The term "layer 3 switching" is used often interchangeably with the term "routing". The term switching is generally used to refer to data forwarding between two network devices that share a common network address. This is also called layer 2 switching or LAN switching.

Conceptually, a router operates in two operational planes (or sub-systems):[2]

Control plane: where the router builds a table (the routing table) that specifies which interface a packet should be forwarded through, using either statically configured entries (static routes) or information learned from other routers in the network through a dynamic routing protocol;
Forwarding plane: where the router actually forwards traffic (called packets in the IP protocol) from an ingress (incoming) interface to the egress (outgoing) interface appropriate for the destination address the packet carries, following rules derived from the routing table built in the control plane.
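
A toy sketch of the forwarding-plane decision may make this concrete: given a routing table (filled here with invented static routes), the router picks the most specific matching prefix for each destination. Python's standard ipaddress module keeps the example short; real routers use specialized lookup structures such as tries.

    # Toy longest-prefix-match lookup against a static routing table.
    import ipaddress

    ROUTES = {
        ipaddress.ip_network("10.0.0.0/8"): "eth0",
        ipaddress.ip_network("10.1.0.0/16"): "eth1",
        ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
    }

    def egress(destination):
        addr = ipaddress.ip_address(destination)
        matches = [net for net in ROUTES if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)  # most specific wins
        return ROUTES[best]

    print(egress("10.1.2.3"))    # eth1 (the /16 beats the /8 and the default)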

The very first device that had fundamentally the same functionality as a router does today, i.e. a packet switch, was the Interface Message Processor (IMP); IMPs were the devices that made up the ARPANET, the first packet switching network. The idea for a router (although they were called "gateways" at the time) initially came about through an international group of computer networking researchers called the International Network Working Group (INWG). Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, later that year it became a subcommittee of the International Federation for Information Processing.[6]

These devices were different from most previous packet switches in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that entirely to the hosts (although this particular idea had been previously pioneered in the CYCLADES network).

The idea was explored in more detail, with the intention to produce a real prototype system, as part of two contemporaneous programs. One was the initial DARPA-initiated program, which created the TCP/IP architecture of today. [7] The other was a program at Xerox PARC to explore new networking technologies, which produced the PARC Universal Packet system, although due to corporate intellectual property concerns it received little attention outside Xerox until years later. [8]

The earliest Xerox routers came into operation sometime after early 1974. The first true IP router was developed by Virginia Strazisar at BBN, as part of that DARPA-initiated effort, during 1975-1976. By the end of 1976, three PDP-11-based routers were in service in the experimental prototype Internet. [9]

The first multiprotocol routers were independently created by staff researchers at MIT and Stanford in 1981; the Stanford router was done by William Yeager, and the MIT one by Noel Chiappa; both were also based on PDP-11s. [10] [11] [12] [13]

As virtually all networking now uses IP at the network layer, multiprotocol routers are largely obsolete, although they were important in the early stages of the growth of computer networking, when several protocols other than TCP/IP were in widespread use. Routers that handle both IPv4 and IPv6 arguably are multiprotocol, but in a far less variable sense than a router that processed AppleTalk, DECnet, IP, and Xerox protocols.

In the original era of routing (from the mid-1970s through the 1980s), general-purpose mini-computers served as routers. Although general-purpose computers can perform routing, modern high-speed routers are highly specialized computers, generally with extra hardware added to accelerate both common routing functions such as packet forwarding and specialised functions such as IPsec encryption.

Still, there is substantial use of Linux and Unix machines, running open source routing code, for routing research and selected other applications. While Cisco's operating system was independently designed, other major router operating systems, such as those from Juniper Networks and Extreme Networks, are extensively modified but still have Unix ancestry.

http://www.wikipedia.org/

RAID


RAID, an acronym for redundant array of inexpensive disks or redundant array of independent disks, is a technology that allows high levels of storage reliability from low-cost and less reliable PC-class disk-drive components, via the technique of arranging the devices into arrays for redundancy. This concept was first defined by David A. Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987 as redundant array of inexpensive disks.[1] Marketers representing industry RAID manufacturers later reinvented the term to describe a redundant array of independent disks as a means of dissociating a low-cost expectation from RAID technology.[2]

RAID is now used as an umbrella term for computer data storage schemes that can divide and replicate data among multiple hard disk drives. The different schemes/architectures are named by the word RAID followed by a number, as in RAID 0, RAID 1, etc. RAID's various designs involve two key design goals: increase data reliability and/or increase input/output performance. When multiple physical disks are set up to use RAID technology, they are said to be in a RAID array[3]. This array distributes data across multiple disks, but the array is seen by the computer user and operating system as one single disk. RAID can be set up to serve several different purposes.

RAID combines two or more physical hard disks into a single logical unit using special hardware or software. Hardware solutions are often designed to present themselves to the attached system as a single hard drive, so that the operating system would be unaware of the technical workings. For example, if one were to configure a hardware-based RAID-5 volume using three 250 GB hard drives (two drives for data, and one for parity), the operating system would be presented with a single 500 GB volume. Software solutions are typically implemented in the operating system and would present the RAID volume as a single drive to applications running within the operating system.

There are three key concepts in RAID: mirroring, the writing of identical data to more than one disk; striping, the splitting of data across more than one disk; and error correction, where redundant parity data is stored to allow problems to be detected and possibly repaired (known as fault tolerance). Different RAID schemes use one or more of these techniques, depending on the system requirements. The purpose of using RAID is to improve reliability and availability of data, ensuring that important data is not harmed in case of hardware failure, and/or to increase the speed of file input/output.

Each RAID scheme affects reliability and performance in different ways. Every additional disk included in an array increases the likelihood that one will fail, but by using error checking and/or mirroring, the array as a whole can be made more reliable by the ability to survive and recover from a failure. Basic mirroring can speed up the reading of data, as a system can read different data from multiple disks at the same time, but it may be slow for writing if the configuration requires that all disks must confirm that the data is correctly written. Striping, often used for increasing performance, writes sequential segments of data to different disks, allowing the data to be read back from multiple disks in parallel faster than a single disk could send the same data. Error checking typically will slow down performance as data needs to be read from multiple places and then compared. The design of any RAID scheme is often a compromise in one or more respects, and understanding the requirements of a system is important. Modern disk arrays typically provide the facility to select an appropriate RAID configuration.

RAID 0

RAID 0 (striped disks) distributes data across multiple disks in a way that gives improved speed at any given instant. If one disk fails, however, all of the data on the array will be lost, as there is neither parity nor mirroring. In this regard, RAID 0 is somewhat of a misnomer, in that RAID 0 is non-redundant. A RAID 0 array requires a minimum of two drives. A RAID 0 configuration can be applied to a single drive provided that the RAID controller is hardware-based (as opposed to software, i.e. OS-based arrays) and allows for such a configuration. This allows a single drive to be added to a controller already containing another RAID configuration when the user does not wish to add the additional drive to the existing array. In this case, the controller would be set up as RAID only (as opposed to SCSI in non-RAID configuration), which requires that each individual drive be a part of some sort of RAID array.

RAID 1

RAID 1 mirrors the contents of the disks, making a form of 1:1 ratio realtime mirroring. The contents of each disk in the array are identical to that of every other disk in the array. A RAID 1 array requires a minimum of two drives.

RAID 3, RAID 4

RAID 3 or 4 (striped disks with dedicated parity) combines three or more disks in a way that protects data against the loss of any one disk. Fault tolerance is achieved by adding an extra disk to the array, which is dedicated to storing parity information; the overall capacity of the array is reduced by one disk. A RAID 3 or 4 array requires a minimum of three drives: two to hold striped data, and a third for parity. With the minimum three drives, the storage efficiency is 66 percent; with six drives, it rises to 83 percent.

RAID 5

Striped set with distributed parity or interleave parity requiring 3 or more disks. Distributed parity requires all drives but one to be present to operate; drive failure requires replacement, but the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. The array will have data loss in the event of a second drive failure and is vulnerable until the data that was on the failed drive is rebuilt onto a replacement drive. A single drive failure in the set will result in reduced performance of the entire set until the failed drive has been replaced and rebuilt.
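
That recovery rests on XOR parity, which is easy to demonstrate. In this Python sketch the two data blocks are invented and everything happens in memory; a real controller computes parity per stripe and rotates it across the drives.

    # Sketch of the XOR parity behind RAID 5 recovery.
    def xor_blocks(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    d1 = b"block on drive 1"
    d2 = b"block on drive 2"
    parity = xor_blocks(d1, d2)        # stored on the third drive

    # If drive 2 fails, its block is recomputed from the survivors:
    recovered = xor_blocks(d1, parity)
    assert recovered == d2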

RAID 6

RAID 6 (striped disks with dual parity) combines four or more disks in a way that protects data against loss of any two disks. For example, if the goal is to create 10x1TB of usable space in a RAID 6 configuration, we need two additional disks for the parity data.

RAID 10

RAID 1+0 (or 10) is a mirrored data set (RAID 1) which is then striped (RAID 0), hence the "1+0" name. A RAID 1+0 array requires a minimum of four drives – two mirrored drives to hold half of the striped data, plus another two mirrored for the other half of the data. In Linux, MD RAID 10 is a non-nested RAID type like RAID 1 that only requires a minimum of two drives and may give read performance on the level of RAID 0.

RAID 01

RAID 0+1 (or 01) is a striped data set (RAID 0) which is then mirrored (RAID 1). A RAID 0+1 array requires a minimum of four drives: two to hold the striped data, plus another two to mirror the first pair.
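
To summarize the capacity trade-offs of the levels above, here is a small Python sketch of rough usable-capacity formulas, assuming n identical drives of size s; these are idealized figures, and real arrays lose a little extra space to metadata.

    # Rough usable capacity for n identical drives of size s (in TB, say).
    def usable(level, n, s):
        return {
            "RAID0": n * s,         # striping only, no redundancy
            "RAID1": s,             # every drive holds a full copy
            "RAID5": (n - 1) * s,   # one drive's worth of parity
            "RAID6": (n - 2) * s,   # two drives' worth of parity
            "RAID10": n * s / 2,    # mirrored pairs, then striped
        }[level]

    print(usable("RAID6", 12, 1))   # 10 TB usable, matching the RAID 6 example above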

www.wikipedia.org

IP addresses


An Internet Protocol (IP) address is a numerical label that is assigned to devices participating in a computer network that uses the Internet Protocol for communication between its nodes.[1] An IP address serves two principal functions: host or network interface identification and location addressing. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there."[2]

The designers of TCP/IP defined an IP address as a 32-bit number[1] and this system, known as Internet Protocol Version 4 or IPv4, is still in use today. However, due to the enormous growth of the Internet and the resulting depletion of available addresses, a new addressing system (IPv6), using 128 bits for the address, was developed in 1995[3] and last standardized by RFC 2460 in 1998.[4] Although IP addresses are stored as binary numbers, they are usually displayed in human-readable notations, such as 208.77.188.166 (for IPv4), and 2001:db8:0:1234:0:567:1:1 (for IPv6).

The Internet Protocol also routes data packets between networks; IP addresses specify the locations of the source and destination nodes in the topology of the routing system. For this purpose, some of the bits in an IP address are used to designate a subnetwork. The number of these bits is indicated in CIDR notation, appended to the IP address; e.g., 208.77.188.166/24.
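
As a quick sketch of how that notation splits an address, Python's standard ipaddress module can derive the subnetwork from the example above (output values shown in comments):

    # Splitting an address in CIDR notation into its network and host parts.
    import ipaddress

    iface = ipaddress.ip_interface("208.77.188.166/24")
    print(iface.network)              # 208.77.188.0/24  (the subnetwork)
    print(iface.netmask)              # 255.255.255.0    (24 one-bits)
    print(int(iface.ip) & 0xFF)       # 166, the host part within the /24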

As the development of private networks raised the threat of IPv4 address exhaustion, RFC 1918 set aside a group of private address spaces that may be used by anyone on private networks. They are often used with network address translators to connect to the global public Internet.

The Internet Assigned Numbers Authority (IANA), which manages the IP address space allocations globally, cooperates with five Regional Internet Registries (RIRs) to allocate IP address blocks to Local Internet Registries (Internet service providers) and other entities.

Two versions of the Internet Protocol (IP) are in use: IP Version 4 and IP Version 6. (See IP version history for details.) Each version defines an IP address differently. Because of its prevalence, the generic term IP address typically still refers to the addresses defined by IPv4.


IP version 4 addresses

IPv4 uses 32-bit (4-byte) addresses, which limits the address space to 4,294,967,296 (2^32) possible unique addresses. IPv4 reserves some addresses for special purposes such as private networks (~18 million addresses) or multicast addresses (~270 million addresses).

IPv4 addresses are usually represented in dot-decimal notation (four numbers, each ranging from 0 to 255, separated by dots, e.g. 208.77.188.166). Each part represents 8 bits of the address, and is therefore called an octet. In less common cases of technical writing, IPv4 addresses may be presented in hexadecimal, octal, or binary representations. In most representations each octet is converted individually.
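
Because each octet is simply eight bits of the underlying 32-bit number, the conversions are short to write out. A minimal Python sketch using the example address above:

    # An IPv4 address as dotted octets, one 32-bit integer, and binary.
    octets = [int(part) for part in "208.77.188.166".split(".")]
    value = 0
    for octet in octets:            # each octet contributes 8 bits
        value = (value << 8) | octet

    print(value)                    # 3494755494
    print(" ".join("{:08b}".format(o) for o in octets))
    # 11010000 01001101 10111100 10100110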

IPv4 subnetting

In the early stages of the development of the Internet Protocol,[1] network administrators interpreted an IP address in two parts: a network number portion and a host number portion. The highest-order octet (most significant eight bits) in an address was designated the network number, and the remaining bits were called the rest field or host identifier and were used for host numbering within a network. This method soon proved inadequate as additional networks developed that were independent of the existing networks already designated by a network number. In 1981, the Internet addressing specification was revised with the introduction of classful network architecture.[2]

Classful network design allowed for a larger number of individual network assignments. The first three bits of the most significant octet of an IP address were defined as the class of the address. Three classes (A, B, and C) were defined for universal unicast addressing. Depending on the class, the network identification was based on octet-boundary segments of the entire address. Each successive class used additional octets for the network identifier, thus reducing the possible number of hosts in the higher-order classes (B and C). The following table gives an overview of this now-obsolete system.

Historical classful network architecture:

Class A: first octet 0XXXXXXX (range 0-127); network ID a, host ID b.c.d; 2^7 = 128 networks, 2^24 - 2 = 16,777,214 addresses per network
Class B: first octet 10XXXXXX (range 128-191); network ID a.b, host ID c.d; 2^14 = 16,384 networks, 2^16 - 2 = 65,534 addresses per network
Class C: first octet 110XXXXX (range 192-223); network ID a.b.c, host ID d; 2^21 = 2,097,152 networks, 2^8 - 2 = 254 addresses per network

The articles 'subnetwork' and 'classful network' explain the details of this design.

Although classful network design was a successful developmental stage, it proved unscalable in the rapid expansion of the Internet and was abandoned when Classless Inter-Domain Routing (CIDR) was created for the allocation of IP address blocks and new rules of routing protocol packets using IPv4 addresses. CIDR is based on variable-length subnet masking (VLSM) to allow allocation and routing on arbitrary-length prefixes.

Today, remnants of classful network concepts function only in a limited scope as the default configuration parameters of some network software and hardware components (e.g. netmask), and in the technical jargon used in network administrators' discussions.