Thursday, April 22, 2010

Wide Area Network


A wide area network (WAN) is a computer network that covers a broad area (i.e., any network whose communications links cross metropolitan, regional, or national boundaries [1]). This is in contrast with personal area networks (PANs), local area networks (LANs), campus area networks (CANs), or metropolitan area networks (MANs) which are usually limited to a room, building, campus or specific metropolitan area (e.g., a city) respectively.

WAN design options
WANs are used to connect LANs and other types of networks together, so that users and computers in one location can communicate with users and computers in other locations. Many WANs are built for one particular organization and are private. Others, built by Internet service providers, provide connections from an organization's LAN to the Internet. WANs are often built using leased lines. At each end of the leased line, a router connects to the LAN on one side and a hub within the WAN on the other. Leased lines can be very expensive. Instead of using leased lines, WANs can also be built using less costly circuit switching or packet switching methods. Network protocols including TCP/IP deliver transport and addressing functions. Protocols including Packet over SONET/SDH, MPLS, ATM and Frame Relay are often used by service providers to deliver the links that are used in WANs. X.25 was an important early WAN protocol, and is often considered to be the "grandfather" of Frame Relay, as many of the underlying protocols and functions of X.25 are still in use today (with upgrades) by Frame Relay.

Academic research into wide area networks can be broken down into three areas: mathematical models, network emulation, and network simulation.

Performance improvements are sometimes delivered via WAFS or WAN optimization.

Dynamic Host Configuration Protocol


The Dynamic Host Configuration Protocol (DHCP) is a computer networking protocol used by hosts (DHCP clients) to retrieve IP address assignments and other configuration information.

DHCP uses a client-server architecture. The client sends a broadcast request for configuration information. The DHCP server receives the request and responds with configuration information from its configuration database.

In the absence of DHCP, every host on a network must be configured manually, a time-consuming and often error-prone undertaking.

Dynamic Host Configuration Protocol automates network-parameter assignment to network devices from one or more fault-tolerant DHCP servers. Even in small networks, DHCP is useful because it can make it easy to add new machines to the network.

When a DHCP-configured client (a computer or any other network-aware device) connects to a network, the DHCP client sends a broadcast query requesting necessary information from a DHCP server. The DHCP server manages a pool of IP addresses and information about client configuration parameters such as default gateway, domain name, the DNS servers, other servers such as time servers, and so forth. On receiving a valid request, the server assigns the computer an IP address, a lease (length of time the allocation is valid), and other IP configuration parameters, such as the subnet mask and the default gateway. The query is typically initiated immediately after booting, and must complete before the client can initiate IP-based communication with other hosts.
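The broadcast-query-and-lease flow described above can be sketched in Python. This is only an illustration of the message sequence: real DHCP uses binary packets broadcast over UDP ports 67/68, and all names and addresses below are invented.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the DHCP lease exchange the paragraph describes.
# Messages are plain dictionaries standing in for real DHCP packets.

@dataclass
class DhcpServer:
    pool: list                                  # addresses available for lease
    lease_seconds: int = 86400
    leases: dict = field(default_factory=dict)  # MAC -> leased IP

    def handle_discover(self, mac):
        """Offer an address plus the other configuration parameters."""
        ip = self.leases.get(mac) or self.pool[0]
        return {"type": "OFFER", "ip": ip, "lease": self.lease_seconds,
                "subnet_mask": "255.255.255.0", "router": "192.168.1.1"}

    def handle_request(self, mac, ip):
        """Commit the lease and acknowledge it."""
        if ip in self.pool:
            self.pool.remove(ip)
        self.leases[mac] = ip
        return {"type": "ACK", "ip": ip, "lease": self.lease_seconds}

server = DhcpServer(pool=["192.168.1.100", "192.168.1.101"])
offer = server.handle_discover("aa:bb:cc:dd:ee:ff")            # client broadcasts a query
ack = server.handle_request("aa:bb:cc:dd:ee:ff", offer["ip"])  # client accepts the offer
print(ack["ip"])   # 192.168.1.100
```

Only after the final acknowledgment does the client configure its interface and begin IP-based communication.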

Depending on implementation, the DHCP server may have three methods of allocating IP addresses:

dynamic allocation: A network administrator assigns a range of IP addresses to the DHCP server, and each client computer on the LAN has its IP software configured to request an IP address from the DHCP server during network initialization. The request-and-grant process uses a lease concept with a controllable time period, allowing the DHCP server to reclaim (and then reallocate) IP addresses that are not renewed (dynamic re-use of IP addresses).
automatic allocation: The DHCP server permanently assigns a free IP address to a requesting client from the range defined by the administrator. This is like dynamic allocation, but the DHCP server keeps a table of past IP address assignments, so that it can preferentially assign to a client the same IP address that the client previously had.
static allocation: The DHCP server allocates an IP address based on a table with MAC address/IP address pairs, which are manually filled in (perhaps by a network administrator). Only requesting clients with a MAC address listed in this table will be allocated an IP address. This feature (which is not supported by all devices) is variously called Static DHCP Assignment (by DD-WRT), fixed-address (by the dhcpd documentation), DHCP reservation or Static DHCP (by Cisco/Linksys), and IP reservation or MAC/IP binding (by various other router manufacturers).
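The three allocation strategies can be combined in one small sketch. This is a hypothetical toy, not how a real server such as ISC dhcpd is structured, and all addresses are illustrative: static reservations are checked first, expired dynamic leases are reclaimed for re-use, and previously seen clients are preferentially given their old address (the automatic-allocation behaviour).

```python
import time

# Toy allocator combining the three strategies described above (illustrative only).
class AllocationServer:
    def __init__(self, pool, static_map, lease_seconds=3600):
        self.pool = list(pool)          # dynamic range set by the administrator
        self.static_map = static_map    # MAC -> IP reservations (static allocation)
        self.lease_seconds = lease_seconds
        self.history = {}               # past MAC -> IP assignments (automatic allocation)
        self.expiry = {}                # IP -> lease expiry timestamp (dynamic allocation)

    def allocate(self, mac, now=None):
        now = now or time.time()
        # static allocation: a reserved MAC always gets its fixed address
        if mac in self.static_map:
            return self.static_map[mac]
        # reclaim expired leases (dynamic re-use of IP addresses)
        for ip, until in list(self.expiry.items()):
            if until < now:
                self.pool.append(ip)
                del self.expiry[ip]
        # prefer the address this client held before, if it is still free
        preferred = self.history.get(mac)
        ip = preferred if preferred in self.pool else (self.pool[0] if self.pool else None)
        if ip is None:
            return None                 # pool exhausted
        self.pool.remove(ip)
        self.history[mac] = ip
        self.expiry[ip] = now + self.lease_seconds
        return ip

server = AllocationServer(["10.0.0.10", "10.0.0.11"],
                          {"00:11:22:33:44:55": "10.0.0.2"})
print(server.allocate("00:11:22:33:44:55"))   # 10.0.0.2  (static reservation)
print(server.allocate("aa:bb:cc:00:00:01"))   # 10.0.0.10 (dynamic lease)
```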



IPv6


Internet Protocol version 6 (IPv6) is the next-generation Internet Protocol version designated as the successor to IPv4, the first implementation used in the Internet, which is still in dominant use. It is an Internet Layer protocol for packet-switched internetworks. The main driving force for the redesign of the Internet Protocol is the foreseeable IPv4 address exhaustion. IPv6 was defined in December 1998 by the Internet Engineering Task Force (IETF) with the publication of an Internet standard specification, RFC 2460.

IPv6 has a vastly larger address space than IPv4. This results from the use of a 128-bit address, whereas IPv4 uses only 32 bits. The new address space thus supports 2^128 (about 3.4×10^38) addresses. This expansion provides flexibility in allocating addresses and routing traffic and eliminates the primary need for network address translation (NAT), which gained widespread deployment as an effort to alleviate IPv4 address exhaustion.
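The scale of that expansion is easy to check with Python's arbitrary-precision integers:

```python
# Size of the IPv4 and IPv6 address spaces worked out directly.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv4_space)                 # 4294967296 (about 4.3 billion)
print(f"{ipv6_space:.1e}")        # 3.4e+38
print(ipv6_space // ipv4_space)   # 2**96: IPv6 addresses per IPv4 address
```

In other words, every single IPv4 address could be expanded into 2^96 IPv6 addresses without exhausting the space.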

IPv6 also implements new features that simplify aspects of address assignment (stateless address autoconfiguration) and network renumbering (prefix and router announcements) when changing Internet connectivity providers. The IPv6 subnet size has been standardized by fixing the size of the host identifier portion of an address to 64 bits to facilitate an automatic mechanism for forming the host identifier from Link Layer media addressing information (MAC address).
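The automatic mechanism mentioned above is known as modified EUI-64: the 64-bit host identifier is formed from the 48-bit MAC address by inserting ff:fe in the middle and flipping the universal/local bit. A minimal sketch, with an invented MAC address and prefix:

```python
# Sketch of the modified EUI-64 derivation of an IPv6 interface identifier
# from a 48-bit MAC address (illustrative MAC and prefix).

def eui64_interface_id(mac: str) -> str:
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                            # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    # group the 8 bytes into the four 16-bit hextets of the host portion
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

# A host with this MAC on the (illustrative) prefix 2001:db8:1:2::/64 would
# autoconfigure the address 2001:db8:1:2: followed by this interface identifier.
print(eui64_interface_id("00:1a:2b:3c:4d:5e"))   # 21a:2bff:fe3c:4d5e
```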

Network security is integrated into the design of the IPv6 architecture. Internet Protocol Security (IPsec) was originally developed for IPv6, but found widespread optional deployment first in IPv4 (into which it was back-engineered). The IPv6 specifications mandate IPsec implementation as a fundamental interoperability requirement.

In December 2008, despite marking its 10th anniversary as a Standards Track protocol, IPv6 was only in its infancy in terms of general worldwide deployment. A 2008 study[1] by Google Inc. indicated that penetration was still less than one percent of Internet-enabled hosts in any country. IPv6 has been implemented on all major operating systems in use in commercial, business, and home consumer environments.[2]

Intrusion detection system


An IDS is a device (or application) that monitors network and/or system activities for malicious activities or policy violations and produces reports to a Management Station. Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of possible incidents, which are violations or imminent threats of violation of computer security policies, acceptable use policies, or standard security practices.[1] Intrusion prevention is the process of performing intrusion detection and attempting to stop detected possible incidents.[1] Intrusion detection and prevention systems (IDPS) are primarily focused on identifying possible incidents, logging information about them, attempting to stop them, and reporting them to security administrators.[1] In addition, organizations use IDPSs for other purposes, such as identifying problems with security policies, documenting existing threats, and deterring individuals from violating security policies.[1] IDPSs have become a necessary addition to the security infrastructure of nearly every organization.[1]

IDPSs typically record information related to observed events, notify security administrators of important observed events, and produce reports.[1] Many IDPSs can also respond to a detected threat by attempting to prevent it from succeeding.[1] They use several response techniques, which involve the IDPS stopping the attack itself, changing the security environment (e.g., reconfiguring a firewall), or changing the attack’s content.[1]

There are two main types of IDS: network-based and host-based.

In a network-based intrusion-detection system (NIDS), the sensors are located at choke points in the network to be monitored, often in the demilitarized zone (DMZ) or at network borders. The sensor captures all network traffic and analyzes the content of individual packets for malicious traffic.

In a host-based system, the sensor usually consists of a software agent, which monitors all activity of the host on which it is installed, including file system, logs and the kernel. Some application-based IDS are also part of this category.

Network intrusion detection system (NIDS)
It is an independent platform that identifies intrusions by examining network traffic and monitors multiple hosts. Network Intrusion Detection Systems gain access to network traffic by connecting to a hub, network switch configured for port mirroring, or network tap. An example of a NIDS is Snort.
Host-based intrusion detection system (HIDS)
It consists of an agent on a host that identifies intrusions by analyzing system calls, application logs, file-system modifications (binaries, password files, capability/acl databases) and other host activities and state. An example of a HIDS is OSSEC.
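The signature-matching core of a NIDS can be sketched in a few lines: each rule pairs a byte pattern with an alert message, and every observed payload is checked against the rule set. Real systems such as Snort also match on protocol fields, ports, and stateful context; the rules below are invented for illustration.

```python
# Minimal sketch of signature-based detection (illustrative rules only).
RULES = [
    (b"/etc/passwd", "attempted read of the password file"),
    (b"cmd.exe", "possible Windows command-shell probe"),
]

def inspect(payload: bytes):
    """Return an alert message for every rule the payload matches."""
    return [msg for pattern, msg in RULES if pattern in payload]

alerts = inspect(b"GET /../../etc/passwd HTTP/1.0")
print(alerts)   # ['attempted read of the password file']
```

A sensor would run this check on traffic captured from a mirror port or tap, logging each alert for the security administrator.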

Intrusion detection systems can also be system-specific using custom tools and honeypots.

Telecommunications service provider (TSP)


A telecommunications service provider or TSP is a type of communications service provider that has traditionally provided telephone and similar services. This category includes ILECs, CLECs, and mobile wireless communication companies.

While some people use the terms "telecom service provider" and "communications service provider" interchangeably, the term TSP generally excludes Internet service providers (ISPs), cable companies, satellite TV, and managed service providers.

TSPs provide access to telephone and related communications services. In the past, TSPs in most countries were government owned and operated, owing to the large capital expenditure the business requires. Today there are many private operators in most regions of the world, and most of the government-owned companies have been privatized.

The deregulation or privatization of telecom service providers first happened in the United States with the breakup of the Bell System.

TFTP Server


Trivial File Transfer Protocol (TFTP) is a file transfer protocol, with the functionality of a very basic form of File Transfer Protocol (FTP); it was first defined in 1980.[1]

Due to its simple design, TFTP could be implemented using a very small amount of memory. It was therefore useful for booting computers such as routers that did not have any data storage devices. It is still used to transfer small amounts of data between hosts on a network, such as IP phone firmware or operating system images when a remote X Window System terminal or any other thin client boots from a network host or server. The initial stages of some network-based installation systems (such as Solaris Jumpstart, Red Hat Kickstart, Symantec Ghost and Windows NT's Remote Installation Services) use TFTP to load a basic kernel that performs the actual installation.

Trivial File Transfer Protocol (TFTP) is a simple protocol to transfer files. It has been implemented on top of the User Datagram Protocol (UDP) using port number 69. TFTP is designed to be small and easy to implement, and therefore lacks most of the features of regular FTP. TFTP can only read and write files (or mail) from/to a remote server; it cannot list directories and currently has no provision for user authentication.

In TFTP, any transfer begins with a request to read or write a file, which also serves to request a connection. If the server grants the request, the connection is opened and the file is sent in fixed-length blocks of 512 bytes. Each data packet contains one block of data, and must be acknowledged by an acknowledgment packet before the next packet can be sent. A data packet of less than 512 bytes signals termination of a transfer. If a packet gets lost in the network, the intended recipient will time out and may retransmit its last packet (which may be data or an acknowledgment), thus causing the sender of the lost packet to retransmit it. The sender has to keep just one packet on hand for retransmission, since the lock-step acknowledgment guarantees that all older packets have been received. Notice that both machines involved in a transfer act as both sender and receiver: one sends data and receives acknowledgments, the other sends acknowledgments and receives data.
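The blocking rule above can be sketched directly: a file is cut into numbered 512-byte blocks, and a final block shorter than 512 bytes (possibly empty, when the file size is an exact multiple of 512) marks the end of the transfer.

```python
# Sketch of TFTP's fixed-length blocking; the acknowledgment round-trip
# that follows each block is omitted here.

BLOCK_SIZE = 512

def split_blocks(data: bytes):
    """Yield (block_number, chunk); the final chunk is shorter than 512 bytes."""
    block_num = 0
    while True:
        block_num += 1
        chunk = data[(block_num - 1) * BLOCK_SIZE : block_num * BLOCK_SIZE]
        yield block_num, chunk
        if len(chunk) < BLOCK_SIZE:
            break   # a short (possibly empty) block terminates the transfer

# A 1200-byte file becomes blocks of 512, 512 and 176 bytes.
sizes = [len(chunk) for _, chunk in split_blocks(b"x" * 1200)]
print(sizes)   # [512, 512, 176]
```

Note that a 1024-byte file would be followed by an empty third block, so the receiver always sees an unambiguous end-of-transfer signal.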

Three modes of transfer are currently supported by TFTP: netascii, 8-bit ASCII text; octet, raw 8-bit bytes (this mode replaces the "binary" mode of earlier versions of the specification); and mail, netascii characters sent to a user rather than to a file. Additional modes can be defined by pairs of cooperating hosts.

RIP


The Routing Information Protocol (RIP) is a dynamic routing protocol used in local and wide area networks. As such it is classified as an interior gateway protocol (IGP). It uses the distance-vector routing algorithm. It was first defined in RFC 1058 (1988). The protocol has since been extended several times, resulting in RIP Version 2 (RFC 2453). Both versions are still in use today; however, they are considered technically obsolete, having been superseded by more advanced techniques such as Open Shortest Path First (OSPF) and the OSI protocol IS-IS. RIP has also been adapted for use in IPv6 networks, a standard known as RIPng (RIP next generation), published in RFC 2080 (1997).

The routing algorithm used in RIP, the Bellman-Ford algorithm, was first deployed in a computer network in 1967, as the initial routing algorithm of the ARPANET.

The earliest version of the specific protocol that became RIP was the Gateway Information Protocol, part of the PARC Universal Packet internetworking protocol suite, developed at Xerox PARC. A later version, named the Routing Information Protocol, was part of Xerox Network Systems.

A version of RIP which supported the Internet Protocol (IP) was later included in the Berkeley Software Distribution (BSD) of the Unix operating system. It was known as the routed daemon. Various other vendors would create their own implementations of the routing protocol. Eventually, RFC 1058 unified the various implementations under a single standard.

Tuesday, April 13, 2010

Trend Micro


Trend Micro (TYO: 4704) is a global developer of software and services to protect against computer viruses, malware, spam, and Web-based threats. It is headquartered in Tokyo. Trend Micro was founded in 1988 in Los Angeles by Steve Chang, Jenny Chang and Eva Chen[2]. Steve Chang served as Trend Micro's CEO until 2004 when he was succeeded by co-founder Eva Chen, who had served as CTO since 1996.

In addition to its business products, Trend Micro is known for producing the PC-cillin family of desktop internet security software for consumers, and is the creator of the free online browser-based malware scanner, HouseCall. TrendLabs, Trend Micro's global security threat research, development and support organization, operates in 6 centers worldwide. Trend Micro has over 4000 employees in over 30 countries[3]. The company is on the US Federal Buying Schedule 70 through resellers Softchoice, ASAP Software, and Synnex.

Products
Consumer/Home and Home Office

Trend Micro Internet Security Pro
Trend Micro Internet Security
Trend Micro Antivirus + Antispyware
Small Business

Worry-Free Business Security
InterScan Messaging Hosted Security
Trend Micro SecureSite
Medium Business and Enterprise

OfficeScan Client Server Suite
InterScan Web Security
InterScan Messaging Security
Open Source

OSSEC
Popular Free Tools

HouseCall
CWShredder
Smart Surfing for the iPhone
HijackThis

There are many forms of this software. I can tell you that I recommend this program. One of the things I don't like about it is that it slows down your computer tremendously, but that is one of the drawbacks you have to deal with when it comes to keeping your PC protected from harmful infections.

Malwarebytes Anti-Malware


Malwarebytes' Anti-Malware (MBAM) is a computer application that finds and removes malware.[1] Made by Malwarebytes Corporation, it was released in January 2008. It is available in a free version, which scans for and removes malware when started manually, and a paid version, which provides scheduled scans and real-time protection.

Design
MBAM is intended to find malware that other anti-virus and spyware programs generally miss, including but not limited to rogue security software, adware and spyware.[2][3]

Variants
MBAM is available in both a free and a paid edition.[1] The free edition must be run manually, while the paid version can automatically perform scheduled scans. The paid version also adds real-time protection and IP blocking to prevent access to malicious web sites.[4]

Malwarebytes' Anti-Malware is a program I use daily, along with Trend Micro, to help keep my PC healthy and free of infection. I would recommend this program to anyone.


Internet Security


When a computer connects to a network and begins communicating with other computers, it is essentially taking a risk. Internet security involves protecting a computer's Internet accounts and files from intrusion by unknown users.[1] Basic security measures include well-chosen passwords, appropriate file permissions, and regular backups of the computer's data.

Security concerns are in some ways peripheral to normal business working, but serve to highlight just how important it is that business users feel confident when using IT systems. Security will probably always be high on the IT agenda simply because cyber criminals know that a successful attack can be very profitable. This means they will always strive to find new ways to circumvent IT security, and users will consequently need to be continually vigilant. Whenever decisions need to be made about how to enhance a system, security will need to be held uppermost among its requirements.

Some apparently useful programs also contain features with hidden malicious intent. Such programs are known collectively as malware and include viruses, Trojans, worms, spyware, and bots.

Malware is the most general name for any malicious software designed, for example, to infiltrate, spy on, or damage a computer or other programmable device or system of sufficient complexity, such as a home or office computer system, network, mobile phone, PDA, automated device or robot.

Viruses are programs which are able to replicate their structure or effect by integrating themselves, or references to themselves, into existing files or structures on a penetrated computer. They usually also have a malicious or humorous payload designed to threaten or modify the actions or data of the host device or system without consent, for example by deleting, corrupting or otherwise hiding information from its owner.

Trojans (Trojan horses) are programs which pretend to do one thing but in reality steal information, alter it, or cause other problems on a host such as a computer or other programmable device or system.

Spyware includes programs that surreptitiously monitor keystrokes or other activity on a computer system and report that information to others without consent.

Worms are programs which are able to replicate themselves over a (possibly extensive) computer network, and also perform malicious acts that may ultimately affect a whole society / economy.

Bots are programs that take over and use the resources of a computer system over a network without consent, and communicate those results to others who may control the Bots.

www.wikipedia.org

Monday, April 12, 2010

Routers


A router, pronounced /ˈraʊtər/ in the United States and Canada, and /ˈruːtər/ in the UK and Ireland (to differentiate it from the tool used to rout wood), is a purpose-built computer used to forward data among computer networks beyond directly connected devices. (The directly connected devices are said to be in a LAN, where data are forwarded using network switches.)

More technically, a router is a networking device whose software and hardware, in combination, are customized to the tasks of routing and forwarding information. A router differs from an ordinary computer in that it needs special hardware, called interface cards, to connect to remote devices through either copper cables or optical fiber cable. These interface cards are in fact small computers, specialized to convert electric signals from one form to another using an embedded CPU, an ASIC, or both. In the case of optical fiber, the interface cards (also called ports) convert between optical and electrical signals.

Routers connect two or more logical subnets, which do not share a common network address. The subnets in the router do not necessarily map one-to-one to the physical interfaces of the router.[1] The term "layer 3 switching" is often used interchangeably with the term "routing". The term switching is generally used to refer to data forwarding between two network devices that share a common network address. This is also called layer 2 switching or LAN switching.

Conceptually, a router operates in two operational planes (or sub-systems):[2]

Control plane: where the router builds a table (called the routing table) specifying through which interface a packet for a given destination should be forwarded, using either statically configured entries (called static routes) or information exchanged with other routers in the network through a dynamic routing protocol;
Forwarding plane: where the router actually forwards the traffic (called packets in the IP protocol) from an ingress (incoming) interface to an egress (outgoing) interface appropriate for the destination address that the packet carries, by following rules derived from the routing table built in the control plane.
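The forwarding-plane lookup can be sketched with Python's standard ipaddress module: the router picks the routing-table entry with the longest prefix that contains the packet's destination address. The routes and interface names below are invented for illustration.

```python
import ipaddress

# Sketch of a longest-prefix-match forwarding lookup (illustrative routes).
ROUTING_TABLE = [
    (ipaddress.ip_network("0.0.0.0/0"),   "eth0"),   # default route
    (ipaddress.ip_network("10.0.0.0/8"),  "eth1"),
    (ipaddress.ip_network("10.1.2.0/24"), "eth2"),   # more specific subnet
]

def egress_interface(destination: str) -> str:
    """Return the interface for the most specific matching route."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, iface) for net, iface in ROUTING_TABLE if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]   # longest prefix wins

print(egress_interface("10.1.2.7"))   # eth2
print(egress_interface("10.9.9.9"))   # eth1
print(egress_interface("8.8.8.8"))    # eth0
```

Production routers perform this lookup in specialized hardware (e.g. with tries or TCAMs), but the rule being applied is the same.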

The very first device that had fundamentally the same functionality as a router does today, i.e., a packet switch, was the Interface Message Processor (IMP); IMPs were the devices that made up the ARPANET, the first packet-switching network. The idea for a router (although they were called "gateways" at the time) initially came about through an international group of computer networking researchers called the International Network Working Group (INWG). Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, later that year it became a subcommittee of the International Federation for Information Processing. [6]

These devices were different from most previous packet switches in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that entirely to the hosts (although this particular idea had been previously pioneered in the CYCLADES network).

The idea was explored in more detail, with the intention to produce a real prototype system, as part of two contemporaneous programs. One was the initial DARPA-initiated program, which created the TCP/IP architecture of today. [7] The other was a program at Xerox PARC to explore new networking technologies, which produced the PARC Universal Packet system, although due to corporate intellectual property concerns it received little attention outside Xerox until years later. [8]

The earliest Xerox routers came into operation sometime after early 1974. The first true IP router was developed by Virginia Strazisar at BBN, as part of that DARPA-initiated effort, during 1975-1976. By the end of 1976, three PDP-11-based routers were in service in the experimental prototype Internet. [9]

The first multiprotocol routers were independently created by staff researchers at MIT and Stanford in 1981; the Stanford router was done by William Yeager, and the MIT one by Noel Chiappa; both were also based on PDP-11s. [10] [11] [12] [13]

As virtually all networking now uses IP at the network layer, multiprotocol routers are largely obsolete, although they were important in the early stages of the growth of computer networking, when several protocols other than TCP/IP were in widespread use. Routers that handle both IPv4 and IPv6 arguably are multiprotocol, but in a far less variable sense than a router that processed AppleTalk, DECnet, IP, and Xerox protocols.

In the original era of routing (from the mid-1970s through the 1980s), general-purpose mini-computers served as routers. Although general-purpose computers can perform routing, modern high-speed routers are highly specialized computers, generally with extra hardware added to accelerate both common routing functions such as packet forwarding and specialised functions such as IPsec encryption.

Still, there is substantial use of Linux and Unix machines, running open source routing code, for routing research and selected other applications. While Cisco's operating system was independently designed, other major router operating systems, such as those from Juniper Networks and Extreme Networks, are extensively modified but still have Unix ancestry.

http://www.wikipedia.org/

RAID


RAID, an acronym for redundant array of inexpensive disks or redundant array of independent disks, is a technology that allows high levels of storage reliability from low-cost and less reliable PC-class disk-drive components, via the technique of arranging the devices into arrays for redundancy. This concept was first defined by David A. Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987 as redundant array of inexpensive disks.[1] Marketers representing industry RAID manufacturers later reinvented the term to describe a redundant array of independent disks as a means of dissociating a low-cost expectation from RAID technology.[2]

RAID is now used as an umbrella term for computer data storage schemes that can divide and replicate data among multiple hard disk drives. The different schemes/architectures are named by the word RAID followed by a number, as in RAID 0, RAID 1, etc. RAID's various designs involve two key design goals: increase data reliability and/or increase input/output performance. When multiple physical disks are set up to use RAID technology, they are said to be in a RAID array[3]. This array distributes data across multiple disks, but the array is seen by the computer user and operating system as one single disk. RAID can be set up to serve several different purposes.

RAID combines two or more physical hard disks into a single logical unit using special hardware or software. Hardware solutions are often designed to present themselves to the attached system as a single hard drive, so that the operating system would be unaware of the technical workings. For example, if one were to configure a hardware-based RAID-5 volume using three 250 GB hard drives (two drives for data, and one for parity), the operating system would be presented with a single 500 GB volume. Software solutions are typically implemented in the operating system and would present the RAID volume as a single drive to applications running within the operating system.
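The capacity arithmetic in that example generalizes: a RAID-5 array of n equal drives exposes (n - 1) drives' worth of space, since one drive's worth of capacity goes to parity. A quick sketch:

```python
# RAID-5 usable capacity: (n - 1) drives' worth of space out of n.
def raid5_usable_gb(drive_gb: int, drives: int) -> int:
    assert drives >= 3, "RAID 5 needs at least three drives"
    return drive_gb * (drives - 1)

print(raid5_usable_gb(250, 3))   # 500 (the three-drive example in the text)
print(raid5_usable_gb(250, 4))   # 750
```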

There are three key concepts in RAID: mirroring, the writing of identical data to more than one disk; striping, the splitting of data across more than one disk; and error correction, where redundant parity data is stored to allow problems to be detected and possibly repaired (known as fault tolerance). Different RAID schemes use one or more of these techniques, depending on the system requirements. The purpose of using RAID is to improve reliability and availability of data, ensuring that important data is not harmed in case of hardware failure, and/or to increase the speed of file input/output.

Each RAID scheme affects reliability and performance in different ways. Every additional disk included in an array increases the likelihood that one will fail, but by using error checking and/or mirroring, the array as a whole can be made more reliable by the ability to survive and recover from a failure. Basic mirroring can speed up the reading of data, as a system can read different data from multiple disks at the same time, but it may be slow for writing if the configuration requires that all disks must confirm that the data is correctly written. Striping, often used for increasing performance, writes consecutive segments of data to different disks, allowing the data to be reconstructed from multiple disks faster than a single disk could send the same data. Error checking typically will slow down performance as data needs to be read from multiple places and then compared. The design of any RAID scheme is often a compromise in one or more respects, and understanding the requirements of a system is important. Modern disk arrays typically provide the facility to select an appropriate RAID configuration.
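The error-correction concept rests on a simple property of XOR: the parity block is the byte-wise XOR of the data blocks, so any single missing block can be rebuilt by XOR-ing the parity with the surviving blocks. A minimal sketch with two invented 8-byte "disks":

```python
# XOR parity as used by RAID: lose any one block, rebuild it from the rest.
def xor_blocks(*blocks: bytes) -> bytes:
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

disk1, disk2 = b"HELLO!!!", b"WORLD???"
parity = xor_blocks(disk1, disk2)      # stored on the parity disk

# disk2 fails: reconstruct its contents from the survivor and the parity
rebuilt = xor_blocks(disk1, parity)
print(rebuilt)   # b'WORLD???'
```

This is why a parity-based array survives exactly one drive failure per parity block: XOR can recover one unknown, but not two.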

RAID 0

RAID 0 (striped disks) distributes data across multiple disks in a way that gives improved speed at any given instant. If one disk fails, however, all of the data on the array will be lost, as there is neither parity nor mirroring. In this regard, RAID 0 is somewhat of a misnomer, in that RAID 0 is non-redundant. A RAID 0 array requires a minimum of two drives. A RAID 0 configuration can be applied to a single drive provided that the RAID controller is hardware and not software (i.e. OS-based arrays) and allows for such configuration. This allows a single drive to be added to a controller already containing another RAID configuration when the user does not wish to add the additional drive to the existing array. In this case, the controller would be set up as RAID only (as opposed to SCSI in non-RAID configuration), which requires that each individual drive be a part of some sort of RAID array.

RAID 1

RAID 1 mirrors the contents of the disks, making a form of 1:1 ratio realtime mirroring. The contents of each disk in the array are identical to that of every other disk in the array. A RAID 1 array requires a minimum of two drives.

RAID 3, RAID 4

RAID 3 or 4 (striped disks with dedicated parity) combines three or more disks in a way that protects data against loss of any one disk. Fault tolerance is achieved by adding an extra disk to the array, which is dedicated to storing parity information; the overall capacity of the array is reduced by one disk. A RAID 3 or 4 array requires a minimum of three drives: two to hold striped data, and a third for parity. With the minimum three drives needed for RAID 3, the storage efficiency is 66 percent. With six drives, the storage efficiency is 83 percent.

RAID 5

RAID 5 (striped set with distributed or interleaved parity) requires three or more disks. Distributed parity requires all drives but one to be present to operate; a failed drive requires replacement, but the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. The array will have data loss in the event of a second drive failure and is vulnerable until the data that was on the failed drive is rebuilt onto a replacement drive. A single drive failure in the set will result in reduced performance of the entire set until the failed drive has been replaced and rebuilt.

RAID 6

RAID 6 (striped disks with dual parity) combines four or more disks in a way that protects data against the loss of any two disks. For example, if the goal is 10 TB of usable space built from 1 TB disks, twelve disks are needed: ten for data plus two for the parity data.

RAID 10

RAID 1+0 (or 10) is a mirrored data set (RAID 1) which is then striped (RAID 0), hence the "1+0" name. A RAID 1+0 array requires a minimum of four drives: two mirrored drives to hold half of the striped data, plus another two mirrored drives for the other half. In Linux, MD RAID 10 is a non-nested RAID type like RAID 1 that only requires a minimum of two drives and may give read performance on the level of RAID 0.

RAID 01

RAID 0+1 (or 01) is a striped data set (RAID 0) which is then mirrored (RAID 1). A RAID 0+1 array requires a minimum of four drives: two to hold the striped data, plus another two to mirror the first pair.

www.wikipedia.org

IP addresses


An Internet Protocol (IP) address is a numerical label that is assigned to devices participating in a computer network that uses the Internet Protocol for communication between its nodes.[1] An IP address serves two principal functions: host or network interface identification and location addressing. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there."[2]

The designers of TCP/IP defined an IP address as a 32-bit number[1] and this system, known as Internet Protocol Version 4 or IPv4, is still in use today. However, due to the enormous growth of the Internet and the resulting depletion of available addresses, a new addressing system (IPv6), using 128 bits for the address, was developed in 1995[3] and last standardized by RFC 2460 in 1998.[4] Although IP addresses are stored as binary numbers, they are usually displayed in human-readable notations, such as 208.77.188.166 (for IPv4), and 2001:db8:0:1234:0:567:1:1 (for IPv6).

The Internet Protocol also routes data packets between networks; IP addresses specify the locations of the source and destination nodes in the topology of the routing system. For this purpose, some of the bits in an IP address are used to designate a subnetwork. The number of these bits is indicated in CIDR notation, appended to the IP address; e.g., 208.77.188.166/24.
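The /24 suffix says the first 24 bits of the address are the network part. Python's standard-library `ipaddress` module does this math directly, using the same example address as above:

```python
# CIDR notation: the prefix length after the slash marks how many
# leading bits identify the (sub)network.
import ipaddress

iface = ipaddress.ip_interface("208.77.188.166/24")
assert str(iface.network) == "208.77.188.0/24"     # network the host sits on
assert str(iface.netmask) == "255.255.255.0"       # 24 one-bits as a mask
assert iface.network.num_addresses == 256          # 2^(32-24) addresses
```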

As the development of private networks raised the threat of IPv4 address exhaustion, RFC 1918 set aside a group of private address spaces that may be used by anyone on private networks. They are often used with network address translators to connect to the global public Internet.
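The RFC 1918 blocks are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The stdlib `ipaddress` module can check membership; note that its `is_private` flag also covers a few other special-purpose blocks (loopback, link-local) beyond the RFC 1918 ranges:

```python
# Checking whether an address falls in a private range.
import ipaddress

assert ipaddress.ip_address("192.168.1.10").is_private   # RFC 1918
assert ipaddress.ip_address("10.0.0.1").is_private       # RFC 1918
assert not ipaddress.ip_address("208.77.188.166").is_private  # public
```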

The Internet Assigned Numbers Authority (IANA), which manages the IP address space allocations globally, cooperates with five Regional Internet Registries (RIRs) to allocate IP address blocks to Local Internet Registries (Internet service providers) and other entities.

Two versions of the Internet Protocol (IP) are in use: IP Version 4 and IP Version 6. (See IP version history for details.) Each version defines an IP address differently. Because of its prevalence, the generic term IP address typically still refers to the addresses defined by IPv4.


IP version 4 addresses
IPv4 uses 32-bit (4-byte) addresses, which limits the address space to 4,294,967,296 (2^32) possible unique addresses. IPv4 reserves some addresses for special purposes such as private networks (~18 million addresses) or multicast addresses (~270 million addresses).

IPv4 addresses are usually represented in dot-decimal notation (four numbers, each ranging from 0 to 255, separated by dots, e.g. 208.77.188.166). Each part represents 8 bits of the address, and is therefore called an octet. In less common cases of technical writing, IPv4 addresses may be presented in hexadecimal, octal, or binary representations. In most representations each octet is converted individually.
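Since each octet is just 8 bits of the 32-bit value, converting between dot-decimal notation and the underlying integer is a matter of shifting and masking. A small sketch (the helper names are made up for illustration):

```python
# Dot-decimal <-> 32-bit integer, one octet per 8 bits.

def ip_to_int(dotted: str) -> int:
    octets = [int(o) for o in dotted.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    return (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]

def int_to_ip(value: int) -> str:
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

n = ip_to_int("208.77.188.166")
assert n == 0xD04DBCA6               # 208=0xD0, 77=0x4D, 188=0xBC, 166=0xA6
assert int_to_ip(n) == "208.77.188.166"
```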

IPv4 subnetting
In the early stages of development of the Internet Protocol,[1] network administrators interpreted an IP address in two parts, network number portion and host number portion. The highest order octet (most significant eight bits) in an address was designated the network number and the rest of the bits were called the rest field or host identifier and were used for host numbering within a network. This method soon proved inadequate as additional networks developed that were independent from the existing networks already designated by a network number. In 1981, the Internet addressing specification was revised with the introduction of classful network architecture.[2]

Classful network design allowed for a larger number of individual network assignments. The first three bits of the most significant octet of an IP address were defined as the class of the address. Three classes (A, B, and C) were defined for universal unicast addressing. Depending on the class, the network identification was based on octet-boundary segments of the entire address. Each class used successively more octets in the network identifier, thus reducing the possible number of hosts in the higher-order classes (B and C). The following table gives an overview of this now obsolete system.

Historical classful network architecture

Class  First octet in binary  Range of first octet  Network ID  Host ID  Number of networks  Number of addresses
A      0XXXXXXX               0 - 127               a           b.c.d    2^7 = 128           2^24 - 2 = 16,777,214
B      10XXXXXX               128 - 191             a.b         c.d      2^14 = 16,384       2^16 - 2 = 65,534
C      110XXXXX               192 - 223             a.b.c       d        2^21 = 2,097,152    2^8 - 2 = 254

The articles 'subnetwork' and 'classful network' explain the details of this design.
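The class of an address was read directly from the leading bits of the first octet. A sketch of that now-obsolete rule (the function name is illustrative), matching the ranges in the table above:

```python
# Classful addressing: the class follows from the first octet alone.

def classful_class(first_octet: int) -> str:
    if first_octet < 128:      # leading bit 0
        return "A"
    if first_octet < 192:      # leading bits 10
        return "B"
    if first_octet < 224:      # leading bits 110
        return "C"
    return "D/E"               # multicast and reserved ranges

assert classful_class(10) == "A"
assert classful_class(172) == "B"
assert classful_class(208) == "C"
```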

Although classful network design was a successful developmental stage, it proved unscalable in the rapid expansion of the Internet and was abandoned when Classless Inter-Domain Routing (CIDR) was created for the allocation of IP address blocks and new rules of routing protocol packets using IPv4 addresses. CIDR is based on variable-length subnet masking (VLSM) to allow allocation and routing on arbitrary-length prefixes.

Today, remnants of classful network concepts function only in a limited scope as the default configuration parameters of some network software and hardware components (e.g. netmask), and in the technical jargon used in network administrators' discussions.

Tuesday, March 30, 2010

What is an ISP?


An Internet service provider (ISP), also sometimes referred to as an Internet access provider (IAP), is a company that offers its customers access to the Internet. The ISP connects to its customers using a data transmission technology appropriate for delivering Internet Protocol datagrams, such as dial-up, DSL, cable modem, wireless or dedicated high-speed interconnects.

ISPs may provide Internet e-mail accounts to users which allow them to communicate with one another by sending and receiving electronic messages through their ISP's servers. (As part of their e-mail service, ISPs usually offer the user an e-mail client software package, developed either internally or through an outside contract arrangement.[citation needed]) ISPs may provide services such as remotely storing data files on behalf of their customers, as well as other services unique to each particular ISP.

ISPs employ a range of technologies to enable consumers to connect to their network.

For users and small businesses, the most popular options include dial-up, DSL (typically Asymmetric Digital Subscriber Line, ADSL), broadband wireless, cable modem, fiber to the home (FTTH), and Integrated Services Digital Network (ISDN) (typically basic rate interface).

For customers with more demanding requirements, such as medium-to-large businesses, or other ISPs, DSL (often SHDSL or ADSL), Ethernet, Metro Ethernet, Gigabit Ethernet, Frame Relay, ISDN (BRI or PRI), ATM, satellite Internet access and synchronous optical networking (SONET) are more likely to be used.

Typical home user connections

- Dial-up
- DSL
- Broadband wireless access
- Cable Internet
- FTTH
- ISDN
- Wi-Fi

Typical business-type connections

- DSL
- SHDSL
- Ethernet technologies

Friday, March 26, 2010

Technology for disabled


Not many people know, but at the start of the school year I took a job with a long-time friend of mine who is disabled. When he was in the first grade he was diagnosed with Muscular Dystrophy. I first met him when I was in 6th grade, when the new middle school was built and all of the county elementary schools merged together for the 6th, 7th, and 8th grades. Back then he was being wheeled around in a manual wheelchair, and to get in and out of vehicles they had to pick him up and set him in the van. Throughout the years his condition has progressed and has taken away more of his movement.


When I first met him I didn't know what to expect. I didn't know what was actually wrong with him, whether he had a mental disability or had just broken his leg and couldn't walk for a while. I talked to him and found out his specifics. I found that he was a really nice guy and he was very happy to just live his life. He put no fault on anyone for having this disease, and that was very selfless. It took a while for him to open up to me fully because, being in a wheelchair, he was shy. Throughout middle school we became very good friends, talking about sports and girls and stuff. There were times when I got to wheel him to class.


He got his first power wheelchair in 8th grade and we thought it was awesome. It was so expensive, and we used to push the horn button on it and it would embarrass him, but it was all in fun. They ended up having a lot of problems with that first power wheelchair, and in the 10th grade he got his second power wheelchair, which he still has now. In middle school he got the van he currently has, which has a lift that allows him to get in and out with the push of a button.


I think it is great that technology has progressed so far that there are now vans where you don't even have to lower a lift. If he had one of those vans he wouldn't have some of the trouble he has getting in and out. They also have vans with driving controls that put the brake and gas on the steering wheel. He doesn't have the arm strength to drive, but if he did, the technology is out there. I am all for the changes in technology, not only to make work and the pleasures of life easier, but to make the life of one of my best friends easier and more fulfilling. This job has taught me a lot: not only to value what I have, but also how awesome it is to see what a great attitude my friend has about everything, and how important that is in life.

Tuesday, March 23, 2010

Military and Technology


Technology is changing all around us, and that doesn't stop it from reaching the military. In fact, the military is one of the leaders in technology today. Just think: if everyone around us is coming out with new and improved designs and we're still using a 17th-century musket, I think we would be screwed.


One common example of the change in technology is comparing sniper rifles. Long ago the main sniper rifle was the Springfield. The Springfield could kill from a distance of 600 yards, but was only effective out to 300 or 400. Today's big bad sniper rifle can shoot a lot farther: over 2 miles, and effective to a mile and a half. It broke the record for the longest-range kill.


Today the technology in weapons is so great that a team can get in, do the mission, and get out without ever being heard, because of the advancements in guns and silencers.


There is also something we use called a drone plane. The drone is unmanned and is sent into dangerous areas, maybe to lay down some bombs or, in the worst case, a nuclear missile. (We haven't had to do that for a while.) The drone is such a sleek design. It is amazing to me how a plane can fly without anyone in it flying it. That is the definition of technology: doing something that extraordinary and great.


I think that if they keep coming out with stuff then the United States will still be the strongest and most powerful nation that it is today.

Saturday, March 20, 2010

Technology Before Sleep


A lot of times we put technology before sleeping, whether we're watching television or playing a gaming system. A good question is why we do this. I thought hard about it, and I think the answer is that people just feel like they don't need sleep. People feel like they can stay up until 5 in the morning playing video games and doing other things. I think when it gets to this point you have a problem. I got to this point over the summer, when I had to be at work at 8 and I wasn't going to sleep until 5:30 in the morning. I finally had to admit I had a problem.


Sleep is essential for a person to be healthy. Once sleep is taken out of the equation, you know that you really have a problem. I think people who put technology before sleeping really need to break that bad habit. I thought of this topic because I am up at 8 o'clock on a Saturday morning blogging when I could be sleeping, so this topic was a result of me being awake on a Saturday morning doing schoolwork. I still think that if you game, watch TV, or do stuff with technology while you should be sleeping, you might want to seek help.

Friday, March 19, 2010

The World of Gaming



I think it is fascinating how gaming has changed and evolved over the years. Since my last blog was about the changing of technology, I felt I would go a little deeper in depth and talk about how gaming has changed.




I can remember when all we had to play on was the computer at my house. We never really had a so-called "gaming system," but we had fun playing those computer games. Now we have all kinds of gaming systems. Sure, at that time the original PlayStation was out and Sega had a system, but the first system I ever had was a bootleg Atari. It was fun to play, but as I'm sure you know, those were before my time, so they were kind of outdated, and the computer games were way better.




Then we had the Nintendo 64, and it was awesome. We played that thing for so many hours and had so much fun with it, but there was no way we could play with our friends unless they came over to our house. Then I went to my friend's house and he had the coveted original Xbox. They were able to earn ranks by playing against people all over the world. I wanted one so bad. We finally got one at Christmas to play our favorite game, Halo.




Then came the Xbox 360. The new game Halo 3 came out, and we were unable to play it on the original Xbox. So my brother and I saved our money and bought one. Only we could not play online, because we had dial-up, since we live in the boonies. My dad finally broke down and got Verizon Internet so we could play online. The only bad thing about it was that it cost 65 dollars a month and had a 5 GB monthly allowance. We would play one weekend with all of our friends and already be at 4.5 GB of usage. We recently got Ntelos DSL, because they upgraded to the top of Iron Gate hill where we live. We only get 1.5 down and maybe 200 up, but it has been worth it.




I know what you're thinking: what does this stuff have to do with the changes in gaming? Well, we went from the Atari to the N64 to the Xbox to the Xbox 360, from the original game of Pong to playing with friends and other people from countries around the world. It is cool to think about. Pretty soon there are going to be actual games we can go inside and play, like in that new movie Gamer. I am glad technology is still changing, and I am sure the billions of gamers out there would agree with me.

Thursday, March 18, 2010

Technology Illiteracy


Do you ever think about how many people are illiterate when it comes to technology? I sure do. I see it more in older people; they have trouble working with computers and cell phones especially. Just last week I talked to my grandpa, who lives in Buchanan, and he was telling me he is taking a class at a community college every Tuesday night to learn more about computers. They just got a new one, so he decided to sign up for the class to learn something. It is equivalent to ITE 115 at Dabney. He told me that he finally found someone who knows less about computers than he does. He said some guy picked the keyboard up and was moving it around, and when the teacher asked what he was doing, he said he was trying to get the pointer to click on the Internet. So from that you can understand that some people are illiterate when it comes to technology.


I think that we in the younger generation are not so illiterate about technology because we grew up with it as it changed. I think older people have trouble because they grew up without it, so they never needed it when it came about. I still like that some older people are trying to keep up with technology.

Monday, February 15, 2010

Upgrading Networks


When your network isn't up to par and isn't performing the way you would like, it may be time for you or your business to upgrade. When you decide to upgrade, you need to really plan out your method.


Extensive planning should go into every network upgrade. You need a plan that outlines everything from beginning to end. A good plan helps identify strengths, weaknesses, opportunities, and threats; this is called a SWOT analysis. The plan should clearly define the tasks and the order in which they are completed. Some examples of good planning, like that needed for a network upgrade, include:


- Sports teams following game plans


- Builders following strict blueprints


- Ceremonies or meetings following agendas

Network upgrades are key for employees and managers. They need to find security, hardware, and user flaws to maintain their network's effectiveness. If they neglect to upgrade, there can be serious consequences, such as hackers and the loss of information that is necessary to the jobs of the company. Network upgrades are healthy for a business as a whole, and it is important to find the holes in the network.

Help Desk

I have learned a lot about the way a business works since this class started. We're moving very rapidly through the book, and one thing that caught my eye was how big a role the help desk plays.

The help desk technicians provide solutions to customers' network problems. Help desk user support usually exists at three levels. Incident management is the basic procedure followed when a help desk technician initiates the standard problem-solving processes. Help desk operation relies on opening trouble tickets and logging information.

Customer service and interpersonal skills are important when handling difficult clients and incidents. It is very important as a help desk employee to keep your cool in all of the difficult situations. I know I would have a hard time keeping cool when someone is cussing at me and yelling to get something fixed even though it's not my fault; they are just mad at the whole situation, and I think taking it out on the help desk makes them feel like the right people are getting an earful. If help desk people lose their cool, they could lose clients and customers, get into trouble, or even get fired. It is important for help desk employees to work quickly and thoroughly. Some of the skills consistently used in successful help desk communications include:

- Preparation

- Courteous greeting

- Listening to the customer

- Adapting to the customer's temperament

- Correctly diagnosing a simple problem

- Logging the call

The help desk really plays a huge role in customer satisfaction and ultimately can make or break a business. The help desk possibly plays one of the most important roles in the company.




Here is a kind of funny picture about help desks.

Wednesday, February 3, 2010

Changing of technology



I pondered for a while today about what I could blog about. I thought about doing another boring topic out of the book, but even though I can spice anything up and make it awesome, I thought I would do something a little bit different. In this blog I will talk about my views on the changing of technology.

It seems like a long time ago to me, but in the early days of technology they didn't know near what they know now. I came across some pictures of early hard drives and storage devices, and some of those things were as big as a two-story house. They got better over the years: down to half a story, then down to the size of a monitor. Now hard drives are about the size of two cell phones, or the length of a brick but the width of a cell phone. I'm not too good at explaining the size, but you get the general picture. Storage has gotten so good and so inexpensive. You can get a 16 GB flash drive for thirty bucks or so, when we used to pay 400 for 256 MB. They also have external hard drives, thicker than regular hard drives, and some are out now for 100 dollars with a terabyte of storage. I'm not sure I could ever fill up a terabyte of storage, and it is so convenient to use. Cell phones are changing too. They used to be huge dinosaurs, and now they have some that are the size of your finger.

Technology is vastly changing, and who knows how far it will have come when I reach the age my parents are now. Maybe then I will be telling my kids how technology used to be, and they will probably laugh at me and say it's crazy how bad it was when I was growing up. Until then I can only sit back, watch, and remember the greatness of technology today.
As you can see in the picture above, not too many people have cell phones that look like this anymore.

Monday, January 25, 2010

Roles and Responsibilities (ISP)


There are many teams and departments in an ISP organization. The teams and departments make sure that the network works correctly and operates smoothly. They also need to make sure that the services the ISP offers are readily available and stay that way.


Each department and team has its own role and responsibilities. Examples:


The network operations center (NOC) tests the new connection and makes sure it is working properly. The NOC notifies the help desk when the circuit is ready for operation. The help desk contacts the customer to guide the customer through the process of setting up passwords and other account info.


The onsite installation team is told what circuits and equipment to use and, when ready, installs them at the customer's site.


Planning and provisioning determines whether the new customer has network hardware already or if new hardware needs to be installed.


Customer service receives the order from the customer and makes sure that the customer's requirements are accurately entered into the order-tracking database. Customer service plays a big role in the whole process. If you piss off the customer, the customer isn't going to want to do business with you. Customer service for the most part has to be smooth and friendly, and help the customer in any way they can, to make sure the customer feels comfortable with everything.


People use ISPs every day, but I'm sure they don't think about all of the time and effort that is put in to make these things work for them. I know I sure didn't. That's why I think it's so fascinating that they put all of this together and we don't even know the half of what goes into it.


In closing, ISPs aren't the simplest things in the world and should be treated with a little bit of leniency. With so many roles and responsibilities that go into them, there is room for error, and we as consumers should be lenient toward them.

Monday, January 18, 2010

Internet Service Providers (ISP)

When I started reading about ISPs, I really didn't know their full functionality and how they could someday help me. I found that they are used in businesses every day. Businesses have been using the Internet for different purposes for many years, and they adapt to the changing Internet every day. They use the Internet for activities such as e-commerce, communication, training, and collaboration. A lot of businesses rely on the Internet; without it, they wouldn't exist.

I like looking at the business perspective of things, not only because I work for the business my dad started, but because I just like the whole big-business atmosphere and taking a look at all of the things they do. Learning that businesses use ISPs was really cool to me, and, as I said earlier, some of them would go out of business without the Internet.

Businesses connect to the Internet through the services of an ISP. ISPs not only provide connection services but also offer a range of support services: equipment co-location, web hosting, FTP hosting, application and media hosting, Voice over IP, and tech support, among others.

In closing, ISPs started out small and have evolved into something huge that has changed a lot of businesses and lives. For some more information, click the link below.

en.wikipedia.org/wiki/Internet_service_provider




ISPs connect the world together.

Wednesday, January 13, 2010

Testing Blogger

This is the first day of class. I am testing this blogger site for ITN 155, the second part of Cisco. Blogging is done all around the world every day, and I am excited to give it a try.