ConnNet

ConnNet was a packet switched data network operated by the Southern New England Telephone Company serving the U.S. state of Connecticut.

ConnNet was the nation's first local public packet switching network when it was launched on March 11, 1985. Users could access services such as Dow Jones News Retrieval, CompuServe, Dialcom, GEnie, Delphi, Eaasy Sabre, NewsNet, PeopleLink, the National Library of Medicine, and BIX. ConnNet could also be used to access other national and international packet networks, such as Tymnet and ACCUNET. Large companies also connected their mainframe computers to ConnNet allowing employees access to the mainframes from home. The network is no longer in operation.

Hardware

The X.25 network was based on hardware from Databit, Inc., consisting of three EDX-P Network Nodes, located in Hartford, New Haven and Stamford, that performed switching. Databit also supplied 23 ANP 2520 Advanced Network Processors, each of which provided the system with a point of presence, a network control center and modems. Customers would order leased-line connections into the network for host computers running at 4,800 to 56,000 bits per second (bit/s). Terminals would connect over a leased line at 1,200 to 9,600 bit/s synchronous or 300 to 2,400 bit/s asynchronous, or using dial-up connections at 300 to 1,200 bit/s. The connection to Tymnet was established over an X.75-based 9,600 bit/s analog link from the ConnNet Hartford node to Tymnet's Bloomfield node.

Operations in Tymshare

Organization

Tymshare's Data Networks Division was responsible for the development and maintenance of the network, while Tymnet was responsible for its administration, provisioning and monitoring. Each company had its own software development staff, and a line was drawn between what each group could do: Tymshare development engineers wrote all the code that ran in the network, while the Tymnet staff wrote code running on host computers connected to the network. For this reason, many of the Tymnet projects ran on the Digital Equipment Corporation DECsystem-10 computers that Tymshare offered as timesharing hosts for its customers. Tymnet operations formed a strategic alliance with the Tymshare PDP-10 TYMCOM-X operating system group to assist in developing new network management tools.

Trouble Tracking

Origins

Trouble reports were initially tracked on paper, until a manager at Tymnet wrote a small FORTRAN IV program to maintain a list of problem reports and track their status in a System 1022 database (a database system for TOPS-10 published by Software House). He called the program PAPER, after the old manual way of managing trouble tickets. The program grew as features were added to handle customer information, call-back contact information, escalation procedures, and outage statistics.

Company-wide Use

Access to PAPER became critical as more and more functionality was added. It was eventually maintained on two dedicated PDP-10 computers, model KL-1090, accessible via the Tymnet packet network as Tymshare hosts 23 and 26. Each computer was the size of five refrigerators, with a string of disk drives that looked like eighteen washing machines. Their non-switching power supplies produced +5 volts at 200 amps, making the machines expensive to operate.

Major upgrades

In 1996 the DEC PDP-10s that ran Tymnet's trouble-ticket system were replaced by PDP-10 clones from XKL, Inc. They were accessible via TCP/IP as ticket.tymnet.com and token.tymnet.com, by both TELNET and HTTP. A low-end Sun workstation served as a telnet gateway: it accepted logins from the Tymnet network, via X.25-to-IP translation performed by a Cisco router, and forwarded them to "ticket" or "token". The XKL TOAD-1 systems ran a modified TOPS-20. The application was ported to a newer version of the Fortran compiler and still used the 1022 database.

Decommission

In mid-to-late 1998, Concert produced an inter-company trouble-tracking system for use by both MCI and Concert. This was adopted, and the PAPER data needed for ongoing tickets was re-entered into the new system. The old trouble-tracking system (TTS) was kept up for historical information until the end of the year. In January 1999, both XKL servers (ticket and token) were decommissioned. In late 2003, the hardware left on site in San Jose was accidentally scrapped by the facilities manager during a scheduled cleanup.

Electronic Data Interchange

EDI

Tymshare EDI, MD Payment Systems Company, MCI EDI Department

Tymshare was one of the pioneers in the EDI field. Under McDonnell Douglas, the Payment Systems Company continued that legacy and maintained its own network monitoring and support group. It used Tandem computers connected to a high-speed data link, with Tymnet as the connection and translation medium. Tymshare developed a bisync modem interface (HSA), a translation module to translate between EBCDIC and ASCII (BBXS), and a highly customized X.25 module (XCOM) to interface with the Tandem computers.

There was no TCP/IP equivalent of this service, so to keep it running after the shutdown of Tymnet, an ingenious solution was adopted. A special version of the Tymnet Engine node code, which lets nodes and interfaces communicate with one another and with the rest of the network, was created. Instead of relying on the supervisor to validate calls, a table of permitted connections was defined per customer, allowing an incoming call to pass from the HSA interface to the BBXS interface, to the XCOM interface, and on to the Tandem computer. In effect this created a "Tymnet Island": a single Tymnet node that accepted calls from a pre-determined list of clients, with no supervisor needed.
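The permitted-connections table can be pictured as a simple static lookup replacing supervisor validation. The following sketch is purely illustrative: the customer and interface names are hypothetical and not taken from the actual node code.

```python
# Hypothetical sketch of "Tymnet Island" call validation: a static,
# per-customer table of allowed source->destination interface hops
# stands in for the network supervisor. All names are illustrative.
PERMITTED = {
    "acme": [
        ("HSA", "BBXS"),        # bisync modem interface -> translation module
        ("BBXS", "XCOM"),       # EBCDIC/ASCII translation -> X.25 module
        ("XCOM", "tandem-1"),   # X.25 module -> Tandem host
    ],
}

def call_allowed(customer: str, src: str, dst: str) -> bool:
    """Accept a call only if this exact hop appears in the customer's table."""
    return (src, dst) in PERMITTED.get(customer, [])

print(call_allowed("acme", "HSA", "BBXS"))   # permitted hop
print(call_allowed("acme", "HSA", "XCOM"))   # not a listed hop
```

Because the table is fixed at configuration time, a single isolated node can route the full HSA-to-Tandem path with no central authority involved.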

These islands of Tymnet outlived not only the parent company, Tymshare, and the operations company, Tymnet, but also the Tymnet network itself. As of 2008, these Tymnet Island nodes were still running and doing their jobs.

Tymnet - History

Beginnings: Tymshare

Tymshare was founded in 1964 as a time sharing company, selling computer time and software packages for users. It had two SDS/XDS 940 computers; access was via direct dial-up to the computers. In 1968, it purchased Dial Data, another time-sharing service bureau.

In 1968, Ann and Norm Hardy, Bill Frantz, Joe Rinde, and LaRoy Tymes developed the idea of using remote sites with minicomputers to communicate with the mainframes. The minicomputers would serve as the network's nodes, running a program called a "Supervisor" to route data. In November 1971, the first Tymnet Supervisor program became operational. Written in assembly code by LaRoy Tymes for the SDS 940, with architectural design contributions from Norman Hardy, the Supervisor was the beginning of the Tymnet network. The Varian 620i was also used for the Tymnet nodes. During those first years, Tymshare and its direct customers were the network's only users.

It soon became apparent that the SDS 940 could not keep up with the rapid growth of the network. In 1972, Joseph Rinde joined the Tymnet group and began porting the Supervisor code to the 32-bit Interdata 7/32, as the 8/32 was not yet ready. In 1973 the 8/32 became available, but its performance was disappointing, and a crash effort was made to develop a machine that could run Rinde's Supervisor.

In 1974, a second, more efficient version of the Supervisor software became operational. The new Tymnet "Engine" software was used on both the Supervisor machines and on the nodes.

After the migration to the Interdata hardware, Tymnet development continued on the PDP-10. Tymshare sold the Tymnet network software to TRW, which created its own private network, TRWNET.

Tymes and Rinde then developed Tymnet II. Tymnet II ran in parallel with the original network, which continued to run on the Varian machines until it was phased out over a period of several years. Tymnet II's different method of constructing virtual circuits allowed for much better scalability.

Tymnet, Inc. spun off

In about 1979, Tymnet Inc. was spun off from Tymshare Inc. to continue administration and operation of the network. The network continued to grow, and customers who owned their own host computers and wanted access to them from remote sites became interested in connecting their computers to the network. This led to the foundation of Tymnet as a wholly owned subsidiary of Tymshare to run a public network as a common carrier within the United States. This allowed users to connect their host computers and terminals to the network, and use the computers from remote sites or sell time on their computers to other users of the network, with Tymnet charging them for the use of the network.

Tymnet

Tymnet was an international data communications network headquartered in San Jose, California that utilized virtual call packet switched technology and used X.25, SNA/SDLC, ASCII and BSC interfaces to connect host computers (servers) at thousands of large companies, educational institutions, and government agencies. Users typically connected via dial-up connections or dedicated async connections. The business consisted of a large public network that supported dial-up users and a private network business that allowed government agencies and large companies (mostly banks and airlines) to build their own dedicated networks. The private networks were often connected via gateways to the public network to reach locations not on the private network. Tymnet was also connected to dozens of other public networks in the United States and internationally via X.25/X.75 gateways.

As the Internet grew and became almost universally accessible in the late 1990s, the need for services such as Tymnet migrated to Internet-style connections, though the network retained some value in the developing world and in specific legacy roles. However, the value of these links continued to decrease, and Tymnet was officially shut down in 2004.

Network

Tymnet offered local dial-up modem access in most cities in the United States and to a limited degree in Canada, which preferred its own DATAPAC service.

Users would dial into Tymnet and then interact with a simple command-line interface to establish a connection with a remote system. Once connected, data was passed to and from the user as if connected directly to a modem on the distant system. For various technical reasons, the connection was not entirely "invisible", and sometimes required the user to enter arcane commands to make 8-bit clean connections work properly for file transfer.

Tymnet was extensively used by large companies to provide dial-up services for their employees who were "on the road", as well as a gateway for users to connect to large online services such as CompuServe or The Source.

Organization and functionality

In its original implementation, the network supervisor contained most of the routing intelligence in the network. Unlike the TCP/IP protocol underlying the Internet, Tymnet used a circuit-switched layout, which allowed the supervisors to be aware of every possible end-point. In its original incarnation, users connected to nodes built from Varian minicomputers, then entered commands that were passed to the supervisor, which ran on an XDS 940 host.

Circuits were character oriented and the network was oriented towards interactive character-by-character full-duplex communications circuits. The nodes handled character translation between various character sets, which were numerous at that point in time. This did have the side effect of making data transfers quite difficult, as bytes from the file would be invisibly "translated" without specific intervention on the part of the user.

Tymnet later developed their own custom hardware, the Tymnet Engine, which contained both nodes and a supervisor running on one of those nodes. As the network grew, the supervisor was in danger of being overloaded by the sheer number of nodes in the network, since the requirements for controlling the network took a great part of the supervisor's capacity.

Tymnet II was developed in response to this challenge. Tymnet II was developed to ameliorate the problems outlined above by off-loading some of the work-load from the supervisor and providing greater flexibility in the network by putting more intelligence into the node code. A Tymnet II node would set up its own "permuter tables", eliminating the need for the supervisor to keep copies of them, and had greater flexibility in handling its inter-node links. Data transfers were also possible via "auxiliary circuits".

Accessing the Network

Asynchronous Access

Users could use modems on the Public Switched Telephone Network to dial TAC ports, calling either from "dumb" terminals or from computers emulating such terminals. Organizations with a large number of local terminals could install a TAC on their own site, which used a dedicated line, at up to 56 kbit/s, to connect to a switch at the nearest Telenet location. Supported dial-up modems initially had a maximum speed of 1,200 bit/s, later raised to 4,800 bit/s.

Computer Access

Computers supporting the X.25 protocol could connect directly to switching centers. These connections ranged from 2.4 to 56 kbit/s.

Other Access Protocols

Telenet supported remote concentrators for IBM 3270 family intelligent terminals, which communicated, via X.25, to Telenet-written software that ran in IBM 370x series front-end processors.

PC Pursuit

In the late 1980s, Telenet offered a service called PC Pursuit. For a flat monthly fee, customers could dial into the Telenet network in one city, then dial out on the modems in another city to access bulletin board systems and other services. PC Pursuit was popular among computer hobbyists because it sidestepped long-distance charges. In this sense, PC Pursuit was a forerunner of Voice over IP services.

Cities accessible by PC Pursuit

City Code Area Code(s) City
AZPHO 602 Phoenix, Arizona
CAGLE 818 Glendale, California
CALAN 213 Los Angeles, California
CODEN 303 Denver, Colorado
CTHAR 203 Hartford, Connecticut
FLMIA 305 Miami, Florida
GAATL 404 Atlanta, Georgia
ILCHI 312, 815 Chicago, Illinois
MABOS 617 Boston, Massachusetts
MIDET 313 Detroit, Michigan
MNMIN 612 Minneapolis, Minnesota
NCRTP 919 Research Triangle Park, North Carolina
NJNEW 201 Newark, New Jersey
NYNYO 212, 718 New York City
OHCLV 216 Cleveland, Ohio
ORPOR 503 Portland, Oregon
PAPHI 215 Philadelphia, Pennsylvania
TXDAL 214, 817 Dallas, Texas
TXHOU 713 Houston, Texas
WIMIL 414 Milwaukee, Wisconsin

Telenet

Telenet was a packet switched network which went into service in 1974. It was the first publicly available commercial packet-switched network service.

The original founding company, Telenet Inc., was established by Larry Roberts (former head of the ARPANET) and Barry Wessler. GTE acquired Telenet in 1979. It was later acquired by Sprint and renamed "SprintNet". Sprint migrated customers from Telenet to the modern-day SprintLink IP network, one of many networks composing today's Internet. Telenet had its first offices in downtown Washington, D.C., then moved to McLean, Virginia. It was acquired by GTE while in McLean, and later moved its offices to Reston, Virginia.

Under the various names, the company operated a public network, and also sold its packet switching equipment to other carriers and to large enterprise networks.

Coverage

Originally, the public network had switching nodes in seven US cities:

* Washington, D.C. (network operations center as well as switching)
* Boston, MA
* New York, NY
* Chicago, IL
* Dallas, TX
* San Francisco, CA
* Los Angeles, CA

The switching nodes were fed by Telenet Access Controller (TAC) terminal concentrators both colocated and remote from the switches. By 1980, there were over 1000 switches in the public network. At that time, the next largest network using Telenet switches was that of Southern Bell, which had approximately 250 switches.

Internal Network Technology

The initial network used statically-defined hop-by-hop routing, using Prime commercial minicomputers as switches, but then migrated to a purpose-built multiprocessing switch based on 6502 microprocessors. Among the innovations of this second-generation switch was a patented arbitrated bus interface that created a switching fabric, a shared bus in modern terms, among the microprocessors.

Most interswitch lines ran at 56 kbit/s, with a few, such as New York-Washington, at T1 (i.e., 1.544 Mbit/s). The main internal protocol was a proprietary variant on X.75; Telenet also ran standard X.75 gateways to other packet switching networks.

Originally, the switching tables could not be altered separately from the main executable code, and topology updates had to be made by deliberately crashing the switch code and forcing a reboot from the network management center. Improvements in the software allowed new tables to be loaded, but the network never used dynamic routing protocols. Multiple static routes, on a switch-by-switch basis, could be defined for fault tolerance. Network management functions continued to run on Prime minicomputers.

Its X.25 host interface was the first in the industry and Telenet helped standardize X.25 in the CCITT.

Reverse telnet

Reverse telnet is a specialized application of telnet, where the server side of the connection reads and writes data to a TTY line (RS-232 serial port), rather than providing a command shell to the host device. Typically, reverse telnet is implemented on an embedded device (e.g. terminal/console server), which has an Ethernet network interface and serial port(s). Through the use of reverse telnet on such a device, IP-networked users can use telnet to access serially-connected devices.

In the past, reverse telnet was typically used to connect to modems or other external asynchronous devices. Today, reverse telnet is used mostly for connecting to the console port of a router, switch or other device.

Example

On the client, the command line for initiating a "reverse telnet" connection might look like this:

telnet 172.16.1.254 2002

(The syntax in the above example would be valid for the command-line telnet client packaged with many operating systems, including most Unices, or available as an option or add-on.)

In this example, 172.16.1.254 is the IP address of the server, and 2002 is the TCP port associated with a TTY line on the server.

A typical server configuration on a Cisco router would look like this:

version 12.3
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
!
hostname Terminal_Server
!
ip host Router1 2101 8.8.8.8
ip host Router2 2102 8.8.8.8
ip host Router3 2113 8.8.8.8
!
!
interface Loopback0
description Used for Terminal Service
ip address 8.8.8.8 255.255.255.255
!
line con 0
exec-timeout 0 0
password MyPassword
login
line 97 128
transport input telnet
line vty 0 4
exec-timeout 0 0
password MyPassword
login
transport input none
!
end

Telnet - Current status

As of the mid-2000s, while the Telnet protocol itself has been mostly superseded for remote login, Telnet clients are still used, often when diagnosing problems, to manually "talk" to other services without specialized client software. For example, it is sometimes used in debugging network services such as an SMTP, IRC, HTTP, FTP or POP3 server, by serving as a simple way to send commands to the server and examine the responses.

This approach has limitations, as the protocol Telnet clients speak is close to, but not equivalent to, raw mode (due to terminal control handshaking and the special rules regarding \377 and \15). Thus, other software such as nc (netcat) or socat on Unix (or PuTTY on Windows) is finding greater favor with some system administrators for testing purposes, as such tools can be invoked with arguments that suppress terminal control handshaking. Netcat also does not distort the \377 octet, which allows raw access to the TCP socket, unlike standards-compliant Telnet software.
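The same kind of manual protocol conversation can also be scripted with plain TCP sockets. The sketch below stands up a toy line-based service locally (the "220"/"250" replies merely imitate SMTP-style responses, purely for illustration) and then "types" a command at it the way one would in an interactive telnet session:

```python
import socket
import threading

def toy_server(listener):
    # Stands in for a real SMTP/POP3-style daemon: send a banner,
    # answer a single command, then close.
    conn, _ = listener.accept()
    with conn:
        conn.sendall(b"220 ready\r\n")
        if conn.recv(1024).strip() == b"NOOP":
            conn.sendall(b"250 ok\r\n")

listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # any free local port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=toy_server, args=(listener,), daemon=True).start()

# Client side: the bytes you would otherwise type into "telnet host port".
with socket.create_connection(("127.0.0.1", port)) as client:
    banner = client.recv(1024)         # read the greeting
    client.sendall(b"NOOP\r\n")        # send one command line
    reply = client.recv(1024)          # examine the response

print(banner.decode().strip())
print(reply.decode().strip())
```

Unlike a real telnet client, a raw socket like this performs no option negotiation and no byte translation, which is exactly why tools such as netcat are preferred for protocol debugging.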



Telnet is popular with:

* enterprise networks to access host applications, e.g. on IBM Mainframes.
* administration of network elements, e.g., in commissioning, integration and maintenance of core network elements in mobile communication networks.
* MUD games played over the Internet, as well as talkers, MUSHes, MUCKs, MOOes, and the resurgent BBS community.
* embedded systems

Telnet Security

When Telnet was initially developed in 1969, most users of networked computers were in the computer departments of academic institutions, or at large private and government research facilities. In this environment, security was not nearly as much of a concern as it became after the bandwidth explosion of the 1990s. The rise in the number of people with access to the Internet, and by extension, the number of people attempting to crack other people's servers made encrypted alternatives much more of a necessity.

Experts in computer security, such as SANS Institute, and the members of the comp.os.linux.security newsgroup recommend that the use of Telnet for remote logins should be discontinued under all normal circumstances, for the following reasons:


* Telnet, by default, does not encrypt any data sent over the connection (including passwords), and so it is often practical to eavesdrop on the communications and use the password later for malicious purposes; anybody who has access to a router, switch, hub or gateway located on the network between the two hosts where Telnet is being used can intercept the packets passing by and obtain login and password information (and whatever else is typed) with any of several common utilities like tcpdump and Wireshark.

* Most implementations of Telnet have no authentication to ensure that communication is carried out between the two desired hosts and not intercepted in the middle.

* Commonly used Telnet daemons have several vulnerabilities discovered over the years.


These security-related shortcomings have seen usage of the Telnet protocol drop rapidly, especially on the public Internet, in favor of the SSH protocol, first released in 1995. SSH provides much of the functionality of Telnet, with the addition of strong encryption to prevent sensitive data such as passwords from being intercepted, and public-key authentication, to ensure that the remote computer is actually who it claims to be.

As has happened with other early Internet protocols, extensions to the Telnet protocol provide TLS security and SASL authentication that address the above issues. However, most Telnet implementations do not support these extensions; and there has been relatively little interest in implementing these as SSH is adequate for most purposes. The main advantage of TLS-Telnet would be the ability to use certificate-authority signed server certificates to authenticate a server host to a client that does not yet have the server key stored. In SSH, there is a weakness in that the user must trust the first session to a host when it has not yet acquired the server key.

Telnet - protocol details

Telnet is a client-server protocol, based on a reliable connection-oriented transport. Typically, the protocol is used to establish a connection to TCP port 23, where a getty-equivalent program (telnetd) is listening, although Telnet predates TCP/IP and was originally run over NCP.

Initially, Telnet was an ad-hoc protocol with no official definition. Essentially, it used an 8-bit channel to exchange 7-bit ASCII data. Any byte with the high bit set was a special Telnet character.
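The escaping rule this implies can be shown in a few lines. Because a single 0xFF (the IAC, "Interpret As Command", byte) introduces a command, a literal 0xFF in application data must be doubled on the wire; a minimal sketch (function names are illustrative):

```python
IAC = 0xFF  # Telnet "Interpret As Command" marker byte

def escape_outgoing(data: bytes) -> bytes:
    """Prepare application data for a Telnet channel: double every
    literal 0xFF so it is not mistaken for the start of a command."""
    return data.replace(b"\xff", b"\xff\xff")

def unescape_incoming(data: bytes) -> bytes:
    """Reverse the escaping, for data known to contain no real commands."""
    return data.replace(b"\xff\xff", b"\xff")

payload = b"\x01\xff\x02"
wire = escape_outgoing(payload)        # b"\x01\xff\xff\x02"
assert unescape_incoming(wire) == payload
```

This doubling rule is why a telnet client that is not told to pass binary data cleanly will corrupt file transfers containing 0xFF bytes.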

On March 5, 1973, a meeting was held at UCLA where "New Telnet" was defined in two NIC documents: Telnet Protocol Specification, NIC #15372, and Telnet Option Specifications, NIC #15373.

The protocol has many extensions, some of which have been adopted as Internet standards. IETF standards STD 27 through STD 32 define various extensions, most of which are extremely common. Other extensions are on the IETF standards track as proposed standards.

Telnet 5250

IBM 5250 or 3270 workstation emulation is supported via custom telnet clients, TN5250/TN3270, and IBM servers. Clients and servers designed to pass IBM 5250 data streams over Telnet generally do support SSL encryption, as SSH does not include 5250 emulation. Under OS/400, port 992 is the default port for secured telnet.

Telnet

Telnet (Telecommunication network) is a network protocol used on the Internet or local area network (LAN) connections. It was developed in 1969 beginning with RFC 15 and standardized as IETF STD 8, one of the first Internet standards.

The term telnet also refers to software which implements the client part of the protocol. Telnet clients are available for virtually all platforms. Most network equipment and operating systems with a TCP/IP stack support some kind of Telnet server for remote configuration (including ones based on Windows NT). Because of security issues with Telnet, its use has waned as it is replaced by SSH for remote access.

"To telnet" is also used as a verb meaning to establish or use a Telnet or other interactive TCP connection, as in, "To change your password, telnet to the server and run the passwd command".

Most often, a user will be telnetting to a Unix-like server system or a simple network device such as a router. For example, a user might "telnet in from home to check his mail at school". In doing so, he would be using a telnet client to connect from his computer to one of his servers. Once the connection is established, he would then log in with his account information and execute operating system commands remotely on that computer, such as ls or cd.

On many systems, the client may also be used to make interactive raw-TCP sessions. It is commonly believed that a telnet session which does not include an IAC (character 255) is functionally identical to a raw TCP session. This is not the case, however, due to special NVT (Network Virtual Terminal) rules, such as the requirement that a bare CR (ASCII 13) be followed by a NUL (ASCII 0).
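The NVT end-of-line rules can be sketched in a short encoder (the function name is illustrative): CR LF is the NVT newline, and a CR that is not part of CR LF must be transmitted as CR NUL.

```python
def to_nvt(text: str) -> bytes:
    """Encode ASCII text for the Telnet NVT: CR LF is the newline,
    and a bare CR must be sent as CR NUL so it is not misread."""
    out = bytearray()
    data = text.encode("ascii")
    i = 0
    while i < len(data):
        b = data[i]
        if b == 0x0D:                      # CR
            if i + 1 < len(data) and data[i + 1] == 0x0A:
                out += b"\r\n"             # CR LF stays CR LF
                i += 2
                continue
            out += b"\r\x00"               # bare CR becomes CR NUL
        elif b == 0x0A:                    # bare LF: clients commonly
            out += b"\r\n"                 # send it as the CR LF newline
        else:
            out.append(b)
        i += 1
    return bytes(out)

print(to_nvt("a\rb"))    # bare CR is followed by NUL on the wire
print(to_nvt("a\nb"))    # newline goes out as CR LF
```

It is exactly this translation that makes a telnet session differ from a raw TCP stream even when no IAC bytes are involved.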

SSH Port Forwarding


SSH is typically used for logging into remote servers so you have shell access to do maintenance, read your email, restart services, or whatever administration you require. SSH also offers some other native services, such as file copy (using scp and sftp) and remote command execution (using ssh with a command on the command line after the hostname).

Whenever we SSH from one machine to another, we establish a secure encrypted session. The first article in this SSH series looked at properly verifying a server's host key, so that we can be sure that no attacker can perform a man-in-the-middle attack and gain access to read or manipulate what we do in that session. Other articles in this series looked at removing the need for static passwords using SSH user identities, and then using ssh-agent to automate the task of typing passphrases.

SSH also has a wonderful feature called SSH Port Forwarding, sometimes called SSH Tunneling, which allows you to establish a secure SSH session and then tunnel arbitrary TCP connections through it. Tunnels can be created at any time, with almost no effort and no programming, which makes them very appealing. In this article we look at SSH Port Forwarding in detail, as it is a very useful but often misunderstood technology. SSH Port Forwarding can be used for secure communications in a myriad of different ways.

SSH tunneling

An SSH tunnel (sometimes loosely referred to as a VPN) is an encrypted network tunnel created through an SSH connection. SSH is frequently used to tunnel insecure traffic over the Internet in a secure way. For example, Windows machines can share files using the SMB protocol, which is not encrypted. If you were to mount a Windows filesystem remotely through the Internet, someone snooping on the connection could see your files. To mount an SMB file system securely, one can establish an SSH tunnel that routes all SMB traffic to the fileserver inside an SSH-encrypted connection. Even though the SMB traffic itself is insecure, because it travels within an encrypted connection it becomes secure.

In order to create an SSH tunnel, the SSH client is configured to forward a specified remote port and IP address (that is accessible on the SSH server) to a port on the local machine. Once the SSH connection has been established, the user can connect to the specified local port to access the network services that would otherwise be available only at the remote IP address and port.
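The client half of a local forward is, conceptually, just a small TCP relay: listen on a local port, and splice each accepted connection over to the remote address. The sketch below (plain Python sockets, no encryption, helper names of my own choosing) demonstrates only that relay plumbing, using a local echo service to stand in for the remote port; the real `ssh -L` carries the spliced traffic inside the encrypted SSH transport.

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes one way until EOF, then signal the far side we are done.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def serve_forward(listener, target):
    # Accept one local connection and splice it to the target address:
    # the unencrypted essence of what a local port forward does.
    conn, _ = listener.accept()
    remote = socket.create_connection(target)
    t = threading.Thread(target=pipe, args=(remote, conn), daemon=True)
    t.start()
    pipe(conn, remote)
    t.join()

# An "upstream" echo service standing in for the remote IP address/port.
def echo_once(listener):
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))

upstream = socket.socket(); upstream.bind(("127.0.0.1", 0)); upstream.listen(1)
threading.Thread(target=echo_once, args=(upstream,), daemon=True).start()

local = socket.socket(); local.bind(("127.0.0.1", 0)); local.listen(1)
threading.Thread(target=serve_forward,
                 args=(local, upstream.getsockname()), daemon=True).start()

# The user connects to the *local* port and transparently reaches upstream.
with socket.create_connection(local.getsockname()) as c:
    c.sendall(b"hello through the tunnel")
    echoed = c.recv(1024)
print(echoed.decode())
```

With real SSH, the equivalent setup is a single command such as `ssh -L 8080:fileserver:445 gateway`, after which connecting to localhost:8080 reaches fileserver:445 through the encrypted session to the gateway.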

SSH tunnels provide a means to bypass firewalls that prohibit certain Internet services provided that outgoing connections on port 22 are allowed. For example, many institutions prohibit users from accessing Internet web pages (port 80) directly without first being examined by a proxy/filter device. However, if users are able to connect to an external SSH server, it is possible for them to create an ssh tunnel to forward port 80 on an external web server to a given port (probably port 80) on their local machine, and thus access that web page by typing http://localhost in their browser.

More commonly, users may set up their own proxy server at home, using free software such as Squid, and construct a tunnel from their workstation to the proxy. Next, by configuring their browser to use localhost rather than the corporate proxy server, users can access any web page they want, bypassing their company's filters and firewalls.

Another method is to use dynamic port forwarding, which creates a local SOCKS 4/5 proxy server that a user can connect to, effectively creating an encrypted tunnel to the remote SSH server. The user can then configure his/her applications to use the SOCKS proxy server, usually for bypassing filters and firewalls.

Tunneling protocol

The term tunneling protocol describes the encapsulation of one network protocol, called the payload protocol, within a different delivery protocol. Reasons to use tunneling include carrying a payload over an incompatible delivery network, or providing a secure path through an untrusted network.

Tunneling typically contrasts with a layered protocol model such as those of OSI or TCP/IP. The tunnel protocol is usually (but not always) at a higher level than the payload protocol, or at the same level. To understand a particular protocol stack, both the payload and delivery protocol sets must be understood. Protocol encapsulation that is carried out by conventional layered protocols, in accordance with the OSI model or TCP/IP model, for example HTTP over TCP over IP over PPP over a V.92 modem, should not be considered as tunneling.

As an example of network layer over network layer, Generic Routing Encapsulation (GRE), which is a protocol running over IP (IP Protocol Number 47), often is used to carry IP packets, with RFC 1918 private addresses, over the Internet using delivery packets with public IP addresses. In this case, the delivery and payload protocols are compatible, but the payload addresses are incompatible with those of the delivery network.
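The encapsulation step itself is small. As an illustrative sketch (not a working IP stack), the minimal GRE header defined by RFC 2784 is four bytes: a flags/version word of zero followed by the EtherType of the payload (0x0800 for IPv4):

```python
import struct

def gre_encapsulate(inner_packet: bytes, proto: int = 0x0800) -> bytes:
    """Prepend a minimal GRE header (RFC 2784): no checksum, key or
    sequence number, version 0, then the payload's EtherType.
    0x0800 marks an IPv4 payload."""
    flags_and_version = 0x0000
    return struct.pack("!HH", flags_and_version, proto) + inner_packet

# A placeholder stand-in for an inner IPv4 packet (not a valid packet).
inner = b"\x45" + b"\x00" * 19
outer_payload = gre_encapsulate(inner)
print(outer_payload[:4].hex())   # the 4-byte GRE header
```

A real router would then place this GRE payload inside an outer IP packet whose protocol field is 47, with publicly routable source and destination addresses.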

In contrast, an IP payload might believe it sees a data link layer delivery when it is carried inside the Layer 2 Tunneling Protocol, which appears to the payload mechanism as a protocol of the data link layer. L2TP, however, actually runs over the transport layer using User Datagram Protocol (UDP) over IP. The IP in the delivery protocol could run over any data link protocol from IEEE 802.2 over IEEE 802.3 (i.e., standards-based Ethernet) to the Point-to-Point Protocol (PPP) over a dialup modem link.

Tunneling protocols may use data encryption to transport insecure payload protocols over a public network such as the Internet thereby providing VPN functionality. IPSec has an end-to-end Transport Mode, but also can be operated in a Tunneling Mode through a trusted security gateway.

Common tunneling protocols

Examples of tunneling protocols include:

Datagram-based:

* IPsec
* GRE (Generic Routing Encapsulation): supports multiple protocols and multiplexing
* IP in IP tunneling: lower overhead than GRE; used when only one IP stream is to be tunneled
* L2TP (Layer 2 Tunneling Protocol)
* MPLS (Multi-Protocol Label Switching)
* GTP (GPRS Tunnelling Protocol)
* PPTP (Point-to-Point Tunneling Protocol)
* PPPoE (point-to-point protocol over Ethernet)
* PPPoA (point-to-point protocol over ATM)
* IEEE 802.1Q (Ethernet VLANs)
* DLSw (SNA over IP)
* XOT (X.25 datagrams over TCP)
* IPv6 tunneling: 6to4; 6in4; Teredo
* Anything In Anything (AYIYA; e.g. IPv6 over UDP over IPv4, IPv4 over IPv6, IPv6 over TCP over IPv4, etc.)

Stream-based:

* TLS
* SSH
* SOCKS
* HTTP CONNECT command
* Various circuit-level proxy protocols, such as Microsoft Proxy Server's Winsock Redirection Protocol, or WinGate Winsock Redirection Service.

Patch cable or cord

Also known as a patch cable, a patch cord is a length of copper wire or fiber optic cable that connects circuits on a patch panel.

A patch cable or patch cord (sometimes patchcable or patchcord) is an electrical or optical cable used to connect ("patch in") one electronic or optical device to another for signal routing. Devices of different types (e.g. a switch connected to a computer, or a switch connected to a router) are connected with patch cords. Patch cords are usually produced in many different colours so as to be easily distinguishable, and are relatively short, perhaps no longer than two metres. Types of patch cords include microphone cables, headphone extension cables, XLR connector, RCA connector and ¼" TRS connector cables (as well as modular Ethernet cables), and thicker, hose-like cords (snake cable) used to carry video or amplified signals. However, the term typically refers only to the short cables used with patch panels.

Patch cords can be as short as 3 inches or 8 cm, to connect stacked components, or route signals through a patch bay, or as much as twenty feet or 6 m or more in length for snake cables. As length increases, cables are usually thicker, and/or made with more shielding, to prevent signal loss (attenuation) and the introduction of unwanted radio frequencies and hum (electromagnetic interference).

Patch cords are often made of coaxial cables, with a positive or "hot" signal carried through a shielded core, and the negative electrical ground or earthed return connection carried through a wire mesh surrounding the core. Each end of the cable is attached to a connector, so the cord may be plugged in. Types of connectors may vary widely, particularly with adapting cables.

Patch cords may be:

* single-conductor wires using, for example, banana connectors
* coaxial cables using BNC connectors
* Ethernet Cat5, Cat5e, or Cat6 cables using "RJ-45" connectors with TIA/EIA-568-A or TIA/EIA-568-B wiring
* Optical fiber cables
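
As a small illustration of the TIA/EIA-568-A and -B wiring mentioned above, the two pin assignments can be written out as tables (the colour strings and the comparison are our own illustration):

```python
# 8P8C ("RJ-45") plug pin assignments. The two schemes differ only in
# that the green and orange pairs swap places; a cable wired 568-A on
# one end and 568-B on the other is a crossover cable.
T568A = {1: "white/green", 2: "green", 3: "white/orange", 4: "blue",
         5: "white/blue", 6: "orange", 7: "white/brown", 8: "brown"}
T568B = {1: "white/orange", 2: "orange", 3: "white/green", 4: "blue",
         5: "white/blue", 6: "green", 7: "white/brown", 8: "brown"}

# Pins 4, 5, 7, 8 (the blue and brown pairs) are identical in both.
same = [p for p in range(1, 9) if T568A[p] == T568B[p]]
print(same)  # [4, 5, 7, 8]
```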

A very short patch cable may be called a pigtail. These may be used, for example, to connect a wall-mounted telephone to the wallplate. The name may also be synonymous with a dongle if it is also an adapter.

Secure Shell FileSystem

SSHFS (Secure SHell FileSystem) is a file system for Linux (and other operating systems with a FUSE implementation, such as Mac OS X or FreeBSD) capable of operating on files on a remote computer using just a secure shell login on the remote computer. On the local computer where the SSHFS is mounted, the implementation makes use of the FUSE (Filesystem in Userspace) kernel module. The practical effect of this is that the end user can seamlessly interact with remote files being securely served over SSH just as if they were local files on his/her computer. On the remote computer the SFTP subsystem of SSH is used.

The current implementation of SSHFS using FUSE is a rewrite of an earlier version. The rewrite was done by Miklos Szeredi, who also wrote FUSE.

For Mac OS X, Google has also released an SSHFS binary as part of MacFUSE. MacFusion offers a GUI to MacFUSE and a plug-in architecture; plug-ins include FTP and the SSHFS binary from the MacFUSE project.

The administrator can set up a jailed account on the server in order to provide greater security; the client will then see only a limited part of the filesystem.

Trivial File Transfer Protocol

Trivial File Transfer Protocol (TFTP) is a very simple file transfer protocol, with the functionality of a very basic form of FTP; it was first defined in 1980.

Since it is so simple, it is easy to implement in a very small amount of memory — an important consideration at that time. TFTP was therefore useful for booting computers such as routers which did not have any data storage devices. It is still used to transfer small files between hosts on a network, such as when a remote X Window System terminal or any other thin client boots from a network host or server.

TFTP is based in part on the earlier protocol EFTP, which was part of the PUP protocol suite. In the early days of work on the TCP/IP protocol suite, TFTP was often the first protocol implemented on a new host type, because it was so simple.

The original versions of TFTP, prior to RFC 1350, displayed a particularly bad protocol flaw, which was named Sorcerer's Apprentice Syndrome (after the Sorcerer's Apprentice segment of Fantasia) when it was discovered.

TFTP appeared first as part of 4.3 BSD. It is included with Mac OS X through at least version 10.5.

Recently, TFTP has been used by computer worms, such as Blaster, as a method of spreading and infecting new hosts.

Technical information

Some details of TFTP

* It uses UDP port 69 as its transport protocol (unlike FTP which uses TCP port 21).
* It cannot list directory contents.
* It has no authentication or encryption mechanisms.
* It is used to read files from, or write files to, a remote server.
* It supports three different transfer modes, "netascii", "octet" and "mail", with the first two corresponding to the "ASCII" and "image" (binary) modes of the FTP protocol; the third is obsoleted by RFC 1350.
* The original protocol has a file size limit of 32 MB, although this was extended when RFC 2347 introduced option negotiation, which was used in RFC 2348 to introduce block-size negotiation in 1998 (allowing a maximum of 4 GB and potentially higher throughput). If the server and client support block number wraparound, file size is essentially unlimited.
* Since TFTP utilizes UDP, it has to supply its own transport and session support. Each file transferred via TFTP constitutes an independent exchange. That transfer is performed in lock-step, with only one packet (either a block of data, or an 'acknowledgement') ever in flight on the network at any time. Due to this lack of windowing, TFTP provides low throughput over high latency links.
* Due to the lack of security, it is dangerous over the open Internet. Thus, TFTP is generally only used on private, local networks.
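
The original 32 MB ceiling follows directly from the protocol's 16-bit block numbers and 512-byte default block size, as this small calculation shows:

```python
# Why original TFTP tops out near 32 MB: block numbers are 16-bit,
# so at the default 512-byte block size at most 65535 data blocks
# can be addressed without wraparound.
DEFAULT_BLKSIZE = 512
MAX_BLOCKS = 2**16 - 1
limit = MAX_BLOCKS * DEFAULT_BLKSIZE
print(limit)  # 33553920 bytes, just under 32 MiB

# RFC 2348 blksize negotiation (up to 65464 bytes per block)
# raises the ceiling to just under 4 GiB.
print(MAX_BLOCKS * 65464 / 2**30)
```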

Details of a TFTP session

* The initiating host A sends an RRQ (read request) or WRQ (write request) packet to host B at the well-known port number 69, containing the filename and transfer mode.
* B replies with an ACK (acknowledgement) packet to a WRQ, or directly with a DATA packet to an RRQ. The reply is sent from a freshly allocated ephemeral port, and all future packets to host B should be directed to this port.
* The source host sends numbered DATA packets to the destination host, all but the last containing a full-sized block of data. The destination host replies with numbered ACK packets for all DATA packets.
* The final DATA packet must contain less than a full-sized block of data to signal that it is the last. If the size of the transferred file is an exact multiple of the block size, the source sends a final DATA packet containing 0 bytes of data.
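
The packets exchanged in the session above have a very simple wire format (RFC 1350); the following sketch, with helper names of our own choosing, builds and parses them:

```python
import struct

# TFTP opcodes (RFC 1350)
RRQ, WRQ, DATA, ACK, ERROR = 1, 2, 3, 4, 5

def rrq(filename, mode="octet"):
    """Read request: 16-bit opcode, then NUL-terminated filename and mode."""
    return (struct.pack("!H", RRQ)
            + filename.encode() + b"\x00"
            + mode.encode() + b"\x00")

def parse_data(pkt):
    """Return (block_number, data) from a DATA packet."""
    opcode, block = struct.unpack("!HH", pkt[:4])
    assert opcode == DATA
    return block, pkt[4:]

def ack(block):
    """Acknowledge one DATA block (lock-step: one packet in flight)."""
    return struct.pack("!HH", ACK, block)

# A DATA block shorter than 512 bytes signals the end of the transfer.
pkt = struct.pack("!HH", DATA, 1) + b"hello"
block, data = parse_data(pkt)
print(block, len(data) < 512)  # 1 True
```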

FileZilla Server

FileZilla Server is a sister product of FileZilla Client. It is an FTP server supported by the same project and features support for FTP and FTP over SSL/TLS.

FileZilla Server is a free, open source FTP server for Microsoft Windows. Its source code is hosted on SourceForge.net.

Features

FileZilla Server supports FTP and FTPS (FTP over SSL/TLS). It includes numerous features, including:

* Upload and download bandwidth limits
* Compression
* Encryption with SSL/TLS (for FTPS)
* Message log (for debugging and real-time traffic information)
* Limit access to internal LAN traffic or external internet traffic only

A user connections manager in FileZilla Server — displayed along the bottom of the window — allows the administrator to view currently connected users and their uploads/downloads. At present, there are two operations the owner of the server can do to those transfers — to "kill" the client session or to "ban" the user's IP address. This manager shows the real-time status of each active file transfer.

FileZilla Client

FileZilla Client - free & open source FTP client

FileZilla Client (also referred to as FileZilla) is a free, open source, cross-platform FTP client. Binaries are available for Windows, Linux, and Mac OS X. It supports FTP, SFTP, and FTPS (FTP over SSL/TLS). As of June 20, 2008, it was the 10th most popular download of all time from SourceForge.net.

FileZilla's source code is hosted on SourceForge. The project was featured as Project of the Month in November 2003.

Features and limitations

The main features are the site manager, message log, file and folder view, and the transfer queue.

The site manager allows a user to create a list of FTP sites along with their connection data, such as the port number to use, the protocol to use, and whether to use anonymous or normal logon. For normal logon, the username is saved and optionally the password.

The message log is displayed along the top of the window. It displays the console-type output showing the commands sent by FileZilla and the remote server's responses.

The file and folder view, displayed under the message log, provides a graphical interface for FTP. Users can navigate folders and view and alter their contents on both the local and remote machines using an Explorer-style tree interface. Users can drag and drop files between the local and remote computers.

The transfer queue, displayed along the bottom of the window, shows the real-time status of each queued or active file transfer.

As of version 2.2.23, FileZilla uses Unicode internally. As a result, it no longer runs on Windows 9x/ME.

Uploading

FTP mode: Date/timestamps attributes on uploaded files can only be retained if the server supports the MFMT command.

SFTP mode: These attributes can be retained starting with FileZilla 3.0.8.

Downloading

Date/timestamps on downloaded files can be retained only if the filesystem on which they are saved supports file creation and modification timestamps; on FAT32 and NTFS partitions, for example, the download folder can keep the original timestamps that the files have on the server. Only newer FileZilla versions support keeping timestamps, and the option has to be enabled from the menu.

History

FileZilla was started as a computer science class project in the second week of January 2001 by Tim Kosse and two classmates. Before they started to write the code, they discussed which licence to release it under. They decided to make FileZilla an open-source project, because many FTP clients were already available and they didn't think that they would sell even one copy if they made FileZilla commercial.

The alpha version was released in late February 2001, and all required features were implemented by beta 2.1.

Version 3 of FileZilla introduced support for operating systems other than Windows, including Linux and Mac OS X.

SFTP implementations

SFTP client

The term SFTP can also refer to Secure file transfer program, a command-line program that implements the client part of this protocol, such as that supplied with OpenSSH.

The sftp program provides an interactive interface similar to that of traditional FTP clients.

Some implementations of the scp program actually use the SFTP protocol to perform file transfers; however, some such implementations are still able to fall back to the SCP protocol if the server does not provide SFTP service.

SFTP server

There are numerous SFTP server implementations both for UNIX and Windows. The most widely known is perhaps OpenSSH, but there are also proprietary implementations.

SFTP proxy

The adoption of SFTP is hindered somewhat because it is difficult to control SFTP transfers on security devices at the network perimeter. There are standard tools for logging FTP transactions, like TIS fwtk or SUSE FTP proxy, but SFTP is encrypted, rendering traditional proxies ineffective for controlling SFTP traffic.

There are some tools that implement man-in-the-middle for SSH which also feature SFTP control: such a tool is Shell Control Box from BalaBit. These provide SFTP transaction logging as well as logging the actual data transmitted on the wire.

SSH file transfer protocol

In computing, the SSH File Transfer Protocol (sometimes called Secure File Transfer Protocol or SFTP) is a network protocol that provides file transfer and manipulation functionality over any reliable data stream. It is typically used with version two of the SSH protocol (TCP port 22) to provide secure file transfer, but is intended to be usable with other protocols as well.

Capabilities

Compared to the earlier SCP protocol, which allows only file transfers, the SFTP protocol allows for a range of operations on remote files – it is more like a remote file system protocol. An SFTP client's extra capabilities compared to an SCP client include resuming interrupted transfers, directory listings, and remote file removal. For these reasons it is relatively simple to implement a GUI SFTP client compared with a GUI SCP client.

SFTP attempts to be more platform-independent than SCP; for instance, with SCP, the expansion of wildcards specified by the client is up to the server, whereas SFTP's design avoids this problem. While SCP is most frequently implemented on Unix platforms, SFTP servers exist for most platforms.

SFTP is not FTP run over SSH, but rather a new protocol designed from the ground up by the IETF SECSH working group. It is sometimes confused with Simple File Transfer Protocol.

The protocol itself does not provide authentication and security; it expects the underlying protocol to provide these. SFTP is most often used as a subsystem of SSH protocol version 2 implementations, having been designed by the same working group. However, it is possible to run it over SSH-1 (and some implementations support this) or other data streams. Running an SFTP server over SSH-1 is not platform-independent, as SSH-1 does not support the concept of subsystems; an SFTP client wishing to connect to an SSH-1 server needs to know the path to the SFTP server binary on the server side.
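
The protocol's framing is simple: each SFTP packet is a 32-bit length, a one-byte type, and a payload. As an illustration (the `sftp_packet` helper is our own), here is the SSH_FXP_INIT packet a version-3 client sends first over the SSH "sftp" subsystem channel:

```python
import struct

# Packet type numbers from draft-ietf-secsh-filexfer
SSH_FXP_INIT, SSH_FXP_VERSION = 1, 2

def sftp_packet(ptype, payload=b""):
    """Frame an SFTP packet: uint32 length, byte type, payload.
    The length field counts the type byte plus the payload."""
    return struct.pack("!IB", 1 + len(payload), ptype) + payload

# First packet from the client: INIT carrying protocol version 3.
init = sftp_packet(SSH_FXP_INIT, struct.pack("!I", 3))
print(init.hex())  # "000000050100000003"
```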

The Secure Internet Live Conferencing (SILC) protocol defines SFTP as its default file transfer protocol. In SILC, the SFTP data is not protected with SSH; instead, SILC's secure packet protocol is used to encapsulate the SFTP data into SILC packets and deliver it peer-to-peer. This is possible because SFTP is designed to be protocol-independent.

For uploads, the transferred files may be associated with their basic attributes, such as timestamps. This is an advantage over the common FTP protocol, which does not have provision for uploads to include the original date/timestamp attribute.

Standardization

The protocol is not yet an Internet standard. The latest specification is an expired Internet Draft, which defines version 6 of the protocol. Currently the most widely used version is 3, implemented by the popular OpenSSH SFTP server. Many Microsoft Windows-based SFTP implementations use version 4 of the protocol, which lessened its ties with the Unix platform.

The Internet Engineering Task Force (IETF) "Secsh Status Pages" search tool contains links to all versions of the Internet draft-ietf-secsh-filexfer which describes this protocol.

TCP Wrapper

TCP Wrapper is a host-based networking ACL system, used to filter network access to Internet Protocol servers on (Unix-like) operating systems such as Linux or BSD. It allows host or subnetwork IP addresses, names, and/or ident query replies to be used as tokens for filtering access.

The original code was written by Wietse Venema at the Eindhoven University of Technology, The Netherlands, between 1990 and 1995. As of June 1, 2001 the program is released under its own BSD-style license.

The tarball includes a library named libwrap that implements the actual functionality. Initially, only services that were spawned for each connection from a super-server (such as inetd) got wrapped, utilizing the tcpd program. However most common network service daemons today can be linked against libwrap directly. This is used by daemons that operate without being spawned from a super-server, or when a single process handles multiple connections. Otherwise, only the first connection attempt would get checked against its ACLs.

When compared to host access control directives often found in daemons' configuration files, TCP Wrappers have the benefit of runtime ACL reconfiguration (i.e. services don't have to be reloaded or restarted) and a generic approach to network administration.
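
These runtime ACLs live in plain-text files read on every connection attempt. A minimal example of the hosts_access(5) rule format (the addresses and daemon names here are illustrative):

```
# /etc/hosts.allow -- consulted first; first matching rule wins
sshd : 192.168.1.0/255.255.255.0
in.ftpd : .example.com

# /etc/hosts.deny -- consulted only if nothing in hosts.allow matched
ALL : ALL
```

Editing these files takes effect immediately, which is what makes the on-the-fly reconfiguration described above possible.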

This makes it easy for anti-brute-force scripts, such as BlockHosts, DenyHosts or Fail2ban, to add and expire client-blocking rules when excessive connections and/or many failed login attempts are encountered.

While originally written to protect TCP and UDP accepting services, examples of usage to filter on certain ICMP packets (such as 'pingd' – the userspace ping request responder) exist too.

Simple File Transfer Protocol

Simple File Transfer Protocol, as defined by RFC 913, was proposed as an (unsecured) file transfer protocol with a level of complexity intermediate between TFTP and FTP.

It was never widely accepted on the Internet, and is now assigned Historic status by the IETF.

It is sometimes confused with SSH file transfer protocol, a secured file transfer protocol.

It runs over port 115 and is often given the acronym SFTP. It has a command set of 11 commands and supports three types of data transmission: ASCII, BINARY and CONTINUOUS. For systems whose word size is a multiple of 8 bits, the implementation of BINARY and CONTINUOUS is the same.

The protocol supports the following:

1. User id based login (User-id/Password combination)
2. Hierarchical folders
3. File Management (Rename, Delete, Upload, Download, Download with overwrite, Download with append)

The protocol does not support random access inside a file (required for resuming interrupted transfers).

SFTP may refer to:

Relating to file transfer:

* SSH file transfer protocol, a network protocol designed by the IETF to provide secure file transfer and manipulation facilities over the secure shell (SSH) protocol. This is typically meant in context of file transfer.
* FTP over SSH, the practice of running an FTP session over SSH, sometimes called Secure FTP. Rarely used, because FTP's normal 2-channel nature makes such tunneling hard.
* Simple File Transfer Protocol, an unsecured and rarely-used file transfer protocol from the early days of the Internet.
* Serial File Transfer Protocol, a protocol used to transfer files between a PC and an embedded device using RS-232 or similar serial protocols.

SFTP may also refer to:

* Screened fully-shielded twisted pair, a kind of network cable, in contrast to FTP and STP
* Science for the People, a U.S. left-wing organization and magazine
* Six Flags Theme Parks, a chain of amusement parks and theme parks.

FTP servlet an intro

An FTP servlet is an intermediate application that resides between the FTP server and the FTP client. It works as a proxy interposed within client/server communications and helps to offload some of the FTP server's processing onto the servlet. It also provides a firewall- and proxy-friendly file transfer environment by wrapping FTP traffic over HTTP. FTP traffic can also be wrapped over HTTPS using an SSL certificate to provide enhanced security.

Architecture

FTP clients can connect to the FTP servlet over the Internet. In most cases FTP is wrapped over an application layer protocol, most commonly HTTP (for easy, unencrypted transfers) or HTTPS (for encrypted transfers). Using HTTPS requires an SSL certificate at the site of the FTP servlet. A number of simultaneous connections can be made to the FTP servlet, limited by the computing power of the server. The number of end-users supported usually exceeds the number of connections, since not all connected end-users are "active" at once: a connection is consumed only while a request is being served. Consequently, the number of end-users simultaneously online can be greater than the number of active connections the FTP server supports.

Security

FTP servlets protect against direct access to an FTP server from the outside world. The FTP servlet can be housed in the DMZ while the internal network houses the FTP server, so direct access to the internal FTP server cannot be initiated from the outside. Port forwarding between the DMZ and the internal network can be used for additional security.

Issues and drawbacks

FTP servlets can only work with advanced FTP clients that support the wrapping of FTP over HTTP or HTTPS. There are a number of commercially available clients and FTP servlets that work in this way.

List of FTP server return codes

Code – Explanation
100 Series: The requested action is being initiated, expect another reply before proceeding with a new command.
110 Restart marker reply. In this case, the text is exact and not left to the particular implementation; it must read: MARK yyyy = mmmm, where yyyy is the user-process data stream marker and mmmm is the server's equivalent marker (note the spaces between the markers and "=").
120 Service ready in nnn minutes.
125 Data connection already open; transfer starting.
150 File status okay; about to open data connection.
200 Command okay.
202 Command not implemented, superfluous at this site.
211 System status, or system help reply.
212 Directory status.
213 File status.
214 Help message (on how to use the server or the meaning of a particular non-standard command). This reply is useful only to the human user.
215 NAME system type. Where NAME is an official system name from the list in the Assigned Numbers document.
220 Service ready for new user.
221 Service closing control connection. Logged out if appropriate.
225 Data connection open; no transfer in progress.
226 Closing data connection. Requested file action successful (for example, file transfer or file abort).
227 Entering Passive Mode (h1,h2,h3,h4,p1,p2).
228 Entering Long Passive Mode (long address, port).
229 Entering Extended Passive Mode (|||port|).
230 User logged in, proceed.
231 User logged out; service terminated.
232 Logout command noted, will complete when transfer done.
250 Requested file action okay, completed.
257 "PATHNAME" created.
331 User name okay, need password.
332 Need account for login.
350 Requested file action pending further information
421 Service not available, closing control connection. This may be a reply to any command if the service knows it must shut down.
425 Can't open data connection.
426 Connection closed; transfer aborted.
434 Requested host unavailable.
450 Requested file action not taken. File unavailable (e.g., file busy).
451 Requested action aborted. Local error in processing.
452 Requested action not taken. Insufficient storage space in system.
500 Syntax error, command unrecognized. This may include errors such as command line too long.
501 Syntax error in parameters or arguments.
502 Command not implemented.
503 Bad sequence of commands.
504 Command not implemented for that parameter.
530 Not logged in.
532 Need account for storing files.
550 Requested action not taken. File unavailable (e.g., file not found, no access).
551 Requested action aborted. Page type unknown.
552 Requested file action aborted. Exceeded storage allocation (for current directory or dataset).
553 Requested action not taken. File name not allowed.
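
The codes above follow RFC 959's convention that the first digit alone tells a client how to react. A small helper (our own illustration) makes the series explicit:

```python
def ftp_reply_series(code):
    """Classify an FTP reply code by its first digit (RFC 959)."""
    series = {
        1: "positive preliminary",       # expect another reply
        2: "positive completion",        # action succeeded
        3: "positive intermediate",      # send the next command
        4: "transient negative completion",  # retry may succeed
        5: "permanent negative completion",  # do not retry as-is
    }
    return series[code // 100]

print(ftp_reply_series(226))  # positive completion
print(ftp_reply_series(550))  # permanent negative completion
```

This is why simple clients can treat any 2xx reply as success without knowing every individual code.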

List of file transfer protocols

Primarily used with TCP/IP

* Apple Filing Protocol (AFP)
* FTAM
* FTP
* FTPS
* HTTP
* HTTPS
* rcp
* SSH file transfer protocol (SFTP)
* Secure copy (SCP)
* Simple File Transfer Protocol
* rsync

Primarily used with UDP

* Trivial File Transfer Protocol
* File Service Protocol
* UFTP – UDP Based FTP with Multicast
* Multicast File Transfer Protocol
* Tsunami

Primarily used with direct modem connections

* ASCII dump
* BiModem
* CModem
* Compuserve B (aka B protocol or CIS-B)
* JMODEM
* HMODEM
* HSLINK
* HyperProtocol

* Kermit and variants:
o Kermit
o SuperKermit

* LeechModem
* Lynx (protocol)
* MEGAlink (protocol)
* MPt (Puma)
* NMODEM
* Punter family

* SEAlink
* SuperK
* TELINK
* Tmodem

* UUCP and variants:
o UUCP
o UUCP-g

* XMODEM and variants:
o MODEM7 (Batch XMODEM)
o XMODEM, XMODEM-1K, XMODEM-G
o WXMODEM

* YMODEM and variants:
o YMODEM, YMODEM-1K, YMODEM-G

* ZMax
* ZMODEM

List of FTP commands

Below is a list of FTP commands that may be sent to an FTP server, including all commands standardized in RFC 959 by the IETF. All commands below are from RFC 959 unless stated otherwise. Clients often accept user commands that differ from the raw commands sent on the wire; for example, a user types GET and the client translates it into the proper raw command, RETR.

* ABOR - Abort an active file transfer.
* ACCT - Account information.
* ADAT - Authentication/Security Data (RFC 2228)
* ALLO - Allocate sufficient disk space to receive a file.
* APPE - Append.
* AUTH - Authentication/Security Mechanism (RFC 2228)
* CCC - Clear Command Channel (RFC 2228)
* CDUP - Change to Parent Directory.
* CONF - Confidentiality Protected Command (RFC 2228)
* CWD - Change working directory.
* DELE - Delete file.
* ENC - Privacy Protected Channel (RFC 2228)
* EPRT - Specifies an extended address and port to which the server should connect. (RFC 2428)
* EPSV - Enter extended passive mode. (RFC 2428)
* FEAT - Get the feature list implemented by the server. (RFC 2389)
* HELP - Returns usage documentation on a command if specified, else a general help document is returned.
* LANG - Language Negotiation (RFC 2640)
* LIST - Returns information of a file or directory if specified, else information of the current working directory is returned.
* LPRT - Specifies a long address and port to which the server should connect. (RFC 1639)
* LPSV - Enter long passive mode. (RFC 1639)
* MDTM - Return the last-modified time of a specified file. (RFC 3659)
* MIC - Integrity Protected Command (RFC 2228)
* MKD - Make directory.
* MLSD - Lists the contents of a directory if a directory is named. (RFC 3659)
* MLST - Provides data about exactly the object named on its command line, and no others. (RFC 3659)
* MODE - Sets the transfer mode (Stream, Block, or Compressed).
* NLST - Returns a list of file names in a specified directory.
* NOOP - No operation (dummy packet; used mostly as a keepalive).
* OPTS - Select options for a feature. (RFC 2389)
* PASS - Authentication password.
* PASV - Enter passive mode.
* PBSZ - Protection Buffer Size (RFC 2228)
* PORT - Specifies an address and port to which the server should connect.
* PWD - Print working directory. Returns the current directory of the host.
* QUIT - Disconnect.
* REIN - Reinitialize the connection.
* REST - Restart transfer from the specified point.
* RETR - Retrieve (download) a remote file.
* RMD - Remove a directory.
* RNFR - Rename from.
* RNTO - Rename to.
* SITE - Sends site specific commands to remote server.
* SIZE - Return the size of a file. (RFC 3659)
* SMNT - Mount file structure.
* STAT - Returns the current status.
* STOR - Store (upload) a file.
* STOU - Store file uniquely.
* STRU - Set file transfer structure.
* SYST - Return system type.
* TYPE - Sets the transfer type (ASCII/Binary).
* USER - Authentication username.
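
The user-command/raw-command distinction described above can be sketched as a simple translation table (the mapping and helper are our own illustration; real clients vary):

```python
# Hypothetical mapping from common client "user commands" to the raw
# RFC 959 commands actually sent on the control connection.
USER_TO_RAW = {
    "get":    "RETR",   # download a file
    "put":    "STOR",   # upload a file
    "ls":     "LIST",   # directory listing
    "cd":     "CWD",    # change working directory
    "mkdir":  "MKD",
    "delete": "DELE",
    "quit":   "QUIT",
}

def to_raw(user_command):
    """Translate a user command line into the raw command line."""
    verb, _, arg = user_command.partition(" ")
    raw = USER_TO_RAW.get(verb.lower(), verb.upper())
    return f"{raw} {arg}".strip()

print(to_raw("get report.pdf"))  # RETR report.pdf
```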

File-sharing program

A file-sharing program is used to directly or indirectly transfer files from one computer to another computer over a network (e.g. the Internet). While the term may be used to describe client-server disk sharing (also known as shared file access or disk mounting), it is more commonly used to describe file sharing using the peer-to-peer (P2P) model.

Peer-to-peer file sharing typically operates over a network such as Gnutella or BitTorrent. There are trade-offs in using one network over another. A variety of file-sharing programs are available on these different networks. It is common for commercial file-sharing clients to contain intrusive advertising software or spyware.

Categories of clients

* Centralized Clients: OpenNap
o Benefits: Faster searching and downloading
o Negatives: Often more vulnerable to legal and DDOS attacks

* Decentralized clients: Gnutella
o Benefits: Usually more reliable and rarely shut down
o Negatives: Generally slower than centralized systems

* Decentralized tracker-based clients: BitTorrent
o Benefits: Very fast, because many BitTorrent peers concentrate on a single file; principally used to offer new, large files for download; many tracker sites available
o Negatives: Not centrally searchable, tracker sites are often closed down from legal suits or fail, not truly anonymous

* Multi-network clients
o Benefits: allows connection to more than one network, almost always on the client side.
o Negatives: often playing catch-up to individual networks' changes and updates.

* Anonymous peer-to-peer: Freenet, GNUnet, MUTE, I2P
o Benefits: allows for the uncensored free flow of information and ideas
o Negatives: due to anonymity, questionable or illegal material can be exchanged more easily than on other networks; often slower than regular P2P because of the overhead

* Private file-sharing networks

Music Industry

A number of studies have found that file sharing has a negative impact on record sales. Examples include three papers published in the April 2006 issue of the Journal of Law and Economics (Liebowitz; Rob and Waldfogel; Zentner). Alejandro Zentner notes in another paper, published in 2005, that music sales globally dropped from approximately $38 billion in 1999 to $32 billion in 2003, and that this downward trend coincides with the advent of Napster in June 1999. Using aggregate data, Stan J. Liebowitz argues in a series of papers (2005, 2006) that file sharing had a significant negative impact on record sales.

However, a widely cited paper published in February 2007 concludes that file sharing has no negative effect on CD sales. This paper, by Oberholzer-Gee and Strumpf, was published in the Journal of Political Economy and is the only paper which analyzes actual downloads on file-sharing networks. Data gathered from tracking downloads on OpenNap servers indicates that most users logged on very rarely, and when they did they downloaded only a little more than one CD's worth of songs. To show how these downloads affected album sales, the authors tracked sales and downloads of 500 random albums of varying genres and found that illegal downloads were only a small force in the decrease in album sales, possibly even slightly improving sales of the top albums in stores at the time.

CNET News.com staff writer John Borland reports that "even high levels of file-swapping seemed to translate into an effect on album sales that was statistically indistinguishable from zero". Some researchers believe that massive copying has been occurring ever since the invention of tape cassettes, and that the increased economic impact of the simpler access to copying provided by computer networks does not seem to have been large.

In March 2007 the Wall Street Journal reported that CD sales had dropped 20 percent in one year, which it interpreted as the latest sign of a shift in the way people acquire their music. BigChampagne LLC has reported that around one billion songs a month are traded on illegal file-sharing networks. As a result of this decline in CD sales, a significant number of record stores are going out of business, "...making it harder for consumers to find and purchase older titles in stores."

The debate over how file sharing has affected the legal sale of music, especially CDs, is underlined by figures showing a decline in music and record stores. According to an article published by the Almighty Institute of Music Retail, an estimated 900 independent record stores have closed since 2003, leaving 2,700 stores in the USA. Carolyn Draving, the owner of the record store Trac Records, which closed after 32 years, believes the downfall is a direct result of illegal internet downloads. She explains that she lost many long-time customers to the internet and knows for certain that a few stopped coming in because they simply downloaded instead. Another owner, Warren Greene of Spinsters Records, claims that nobody buys CDs anymore and that most of his customers have turned to the internet to obtain their music.

Economic impact

As file sharing has spread, a debate has developed over how copyright infringement (in the form of file sharing copyrighted audio and visual content) affects the legal distribution of media, especially music. In a broader context, commentators have pointed out that the music industry, along with other media industries such as film and TV, is having a difficult time adapting to the digital age.

Software Industry

According to Moisés Naím, even in countries and regions with high intellectual property enforcement standards, such as the US or the EU, piracy rates of one-quarter or more for popular software and operating systems are common. The pirated software is distributed through file sharing at unprecedented rates, and according to Naím, software manufacturers dread the "one disc" effect: a phenomenon in which a single counterfeited copy can be propagated until it has taken over an entire country, pushing the legitimate product out of the market.

Copyright issues

File sharing has grown in popularity with the proliferation of high-speed Internet connections and the relatively small file size and high quality of the MP3 audio format. File sharing is a legal technology with legal uses; however, many users use it to distribute and obtain copyrighted material without permission or authorization, which is viewed by some as piracy of intellectual property, also known as copyright infringement.

Despite the existence of various international treaties, there are still sufficient variations between countries to cause significant difficulties in the protection of copyright. Recent years have seen copyright owners challenging file sharing networks, leading to litigation by industry bodies against private individual file sharers. The legal issues surrounding file sharing have been the subject of debate and conferences, especially among lawyers in the entertainment industries.

The challenges facing copyright holders in the face of file sharing systems highlight that current copyright law and enforcement may not be sufficient to deal with rapidly developing new technologies and uses. Other challenges include ambiguities in the interpretation of copyright law and variations in copyright legislation between countries. The high number of individuals engaged in sharing copyrighted material means that copyright holders face problems relating to mass litigation and to developing processes for evidence and discovery.

File sharing technology has evolved in response to legal challenges. There are low technical barriers to entry for would-be sharers, and many file sharing approaches now obfuscate or hide the fact that sharing is happening, or the identities of those involved, for example through encryption and darknets. Furthermore, it is contested whether the transfer of segmented files constitutes copyright infringement in itself under existing laws.

Further challenges have arisen because of the need to balance self-protection against fair use. A perceived overbalance towards protection (in the form of media that cannot be backed up, cannot be played on multiple systems by the owner, or contains rootkits or irksome security systems inserted by manufacturers), has led to a backlash against protection systems in some quarters. For example, the first crack of AACS was inspired by a perceived unfair restriction on owner usage.

Fourth P2P-Generation

Streams over P2P

Apart from traditional file sharing, there are services that send streams instead of files over a P2P network. Thus one can listen to radio and watch television without any server involved; the streaming media is distributed over a P2P network. Importantly, instead of a treelike network structure, these services use the swarming technology known from BitTorrent. The best examples are Peercast, Miro, Cybersky and demo TV.
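The swarming idea can be sketched in a few lines of Python (a toy illustration with hypothetical names; real clients add chunk scheduling, buffering and peer discovery):

```python
# Toy swarm: peers fetch individual chunks from whichever neighbor
# already has them, instead of relaying a whole stream down a tree.

class Peer:
    def __init__(self, name):
        self.name = name
        self.chunks = {}          # chunk index -> chunk data
        self.neighbors = []       # other Peer objects

    def fetch_chunk(self, index):
        """Ask neighbors for a chunk this peer is missing."""
        for peer in self.neighbors:
            if index in peer.chunks:
                self.chunks[index] = peer.chunks[index]
                return True
        return False

# The broadcaster seeds the chunks; viewers swarm them from each other.
source = Peer("source")
source.chunks = {i: "frame-%d" % i for i in range(5)}

a = Peer("a")
b = Peer("b")
a.neighbors = [source]
b.neighbors = [a]                 # b has no direct link to the source

for i in range(5):
    a.fetch_chunk(i)              # a pulls each chunk from the source
    b.fetch_chunk(i)              # b pulls the same chunk from a

print(sorted(b.chunks))           # prints [0, 1, 2, 3, 4]
```

Because every peer that holds a chunk can serve it onward, the source's upload capacity no longer limits the audience size, which is what distinguishes swarming from a fixed relay tree.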

General

* Broadcatching
* Podcast

Tree structure

* CoolStreaming
* Peercast

Swarm structure such as BitTorrent

* Djingle
* Icecast
* Joost
* MediaBlog
* PeerCast
* PPLive
* PPStream
* SopCast
* TVUPlayer
* Vuze

Third P2P-Generation

Indirect and encrypted

The third generation of peer-to-peer networks are those that have anonymity features built in. Examples of anonymous networks are ANts P2P, RShare, Freenet, I2P, GNUnet and Entropy.

A degree of anonymity is realized by routing traffic through other users' clients, which function as network nodes. This makes it harder to identify who is downloading or who is offering files. Most of these programs also use strong encryption to resist traffic sniffing.

Friend-to-friend networks only allow already-known users (also known as "friends") to connect to the user's computer, then each node can forward requests and files anonymously between its own "friends'" nodes.

Third-generation networks have not reached mass usage for file sharing because most current implementations incur too much overhead in their anonymity features, making them slow or hard to use. However, in countries where very fast fiber-to-the-home Internet access is commonplace, such as Japan, a number of anonymous file-sharing clients have already reached high popularity.

An example: Petra gives a file to Oliver, and Oliver then gives the file to Anna. Petra and Anna thus never become acquainted, and both are protected. Virtual IP addresses are often used to obfuscate the user's network location, because Petra only knows Anna's virtual IP. Although real IPs are necessary to establish a connection between Petra and Oliver, nobody can tell whether Anna really requested the file and Petra really sent it, or whether they were merely forwarding it (as long as they do not reveal their virtual IPs). Additionally, all transfers are encrypted, so that even the network administrators cannot see what was sent to whom. Example software includes WASTE, JetiANts, Tor and I2P. These clients differ greatly in their goals and implementation. WASTE is designed only for small groups and may therefore be considered a darknet; ANts and I2P are public peer-to-peer systems, with anonymization provided exclusively by routing.
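The relay idea behind the Petra/Oliver/Anna example can be sketched as follows (a toy illustration, not the actual ANts or I2P protocol; all names are hypothetical):

```python
# Each node records only its immediate previous hop, so the endpoints
# of a transfer never learn each other's identity.

class Node:
    def __init__(self, name):
        self.name = name
        self.seen_from = None     # the only identity this node learns
        self.payload = None

    def receive(self, payload, sender, route):
        self.seen_from = sender.name
        if route:
            # Not the final recipient: forward to the next hop.
            route[0].receive(payload, self, route[1:])
        else:
            # Final recipient keeps the file.
            self.payload = payload

petra = Node("Petra")
oliver = Node("Oliver")
anna = Node("Anna")

# Petra -> Oliver -> Anna: each hop sees only its direct neighbor.
oliver.receive("file-data", petra, [anna])

print(anna.payload)     # prints file-data
print(anna.seen_from)   # prints Oliver -- Anna never learns of Petra
```

The anonymity cost is visible even in this sketch: the file traverses two links instead of one, which is the overhead that makes such networks slower than direct P2P transfers.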

Ants network

* ANts P2P
* JetiANts
* Hornet

Mute network

* MUTE
* Kommute - KDE

I2P network

* I2P
* I2Phex - Gnutella over I2P
* iMule - eDonkey (Kademlia) over I2P
* Azureus - has I2P plugin

Retroshare-Network (F2F Instant Messenger)

* Retroshare Instant Messenger - chat messenger with private file sharing

Other networks or clients

* Alliance
* Freenet
* GNUnet
* Nodezilla
* OFF System
* Perfect Dark
* Proxyshare
* RShare
* Share
* Tor
* WinNY
* Zultrax

Second P2P-Generation

Decentralization

After Napster encountered legal troubles, Justin Frankel of Nullsoft set out to create a network without a central index server, and Gnutella was the result. Unfortunately, the Gnutella model of all nodes being equal quickly died from bottlenecks as the network grew from incoming Napster refugees. FastTrack solved this problem by having some nodes be 'more equal than others'.

By electing some higher-capacity nodes to be indexing nodes, with lower capacity nodes branching off from them, FastTrack allowed for a network that could scale to a much larger size. Gnutella quickly adopted this model, and most current peer-to-peer networks implement this design, as it allows for large and efficient networks without central servers.
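The two-tier idea can be sketched as follows (hypothetical names; real FastTrack-style networks add supernode election, time-to-live limits and leaf handoff):

```python
# Two-tier topology: high-capacity supernodes hold the index; leaf
# nodes only register their files with one supernode and send their
# queries to it, so ordinary nodes never see search traffic.

class Supernode:
    def __init__(self):
        self.index = {}           # filename -> set of leaf peer names
        self.peers = []           # neighboring supernodes

    def register(self, peer, filenames):
        """A leaf node announces which files it shares."""
        for name in filenames:
            self.index.setdefault(name, set()).add(peer)

    def search(self, filename, hops=2):
        """Answer locally, then fan out to neighboring supernodes."""
        results = set(self.index.get(filename, set()))
        if hops > 0:
            for sn in self.peers:
                results |= sn.search(filename, hops - 1)
        return results

sn1 = Supernode()
sn2 = Supernode()
sn1.peers = [sn2]

sn1.register("leaf-a", ["song.mp3"])
sn2.register("leaf-b", ["song.mp3", "demo.avi"])

# A leaf attached to sn1 finds sources known to both supernodes,
# without every node in the network seeing the query.
print(sn1.search("song.mp3"))
```

Flooding only the small supernode layer, rather than every node as in early Gnutella, is what lets this design scale to much larger networks.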

Also included in the second generation are distributed hash tables (DHTs), which help solve the scalability problem by electing various nodes to index certain hashes (which are used to identify files), allowing for fast and efficient searching for any instances of a file on the network. This is not without drawbacks; perhaps most significantly, DHTs do not directly support keyword searching (as opposed to exact-match searching).
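A toy sketch of the DHT principle (not Kademlia itself; the node names and the hash-to-node mapping here are simplified assumptions):

```python
# Each file's hash deterministically selects the node responsible for
# indexing it, so any peer can locate the index holder directly
# instead of flooding the network with queries.

import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]

def responsible_node(file_name):
    """Map a file's hash onto one of the nodes deterministically."""
    digest = hashlib.sha1(file_name.encode()).digest()
    return NODES[digest[0] % len(NODES)]

# Every peer computes the same mapping, so publishing and lookup
# agree on where the index entry for a given file lives.
index = {}

def publish(file_name, owner):
    node = responsible_node(file_name)
    index.setdefault((node, file_name), set()).add(owner)

def lookup(file_name):
    # Exact-match only: we must already know the key we are hashing,
    # which is why DHTs do not directly support keyword search.
    node = responsible_node(file_name)
    return index.get((node, file_name), set())

publish("album.flac", "peer-1")
publish("album.flac", "peer-2")
print(lookup("album.flac"))   # both sources, found in one step
```

The exact-match limitation mentioned above is visible in `lookup`: the querying peer must hash the precise key, so substring or keyword searches need an extra layer on top of the DHT.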

The best examples are Gnutella, Kazaa and eMule with Kademlia, although Kazaa still uses a central server for logging in. eDonkey2000/Overnet, Gnutella, FastTrack and Ares Galaxy had a combined total of approximately 10.3 million users (as of April 2006, according to slyck.com). This number does not necessarily correspond to the actual number of people who use these networks, since it must be assumed that some use multiple clients for different networks.

Multi-Network-Clients

Further networks or clients

Web-based sharing

Web hosting is also used for file sharing, since it makes private exchange possible. In small communities, popular files can be distributed very quickly and efficiently. Web hosts are independent of each other, so content is not distributed further. Other terms for this are one-click hosting and web-based sharing.

File Sharing On The Social Graph

Recently, Facebook opened its API to third-party developers, which has allowed a new type of file-sharing service to emerge. Box.net and FreeDrive.com are two examples of companies with Facebook applications that allow file sharing to be easily accomplished between friends.

Server-client-protocols

* Audiogalaxy - Service ended in mid-2002.
* Direct Connect
* Napster - Closed in its original form in July 2001, since changed to a fee-based service.
* Scour Exchange - The second exchange network after Napster. No longer exists.
* Soulseek - Still popular today despite being relatively old, with more than 120,000 users online at any time.
* TinyP2P - a functional P2P program written in 15 lines of Python.
* WinMX - The original Frontcode servers were switched off in September 2005, but alternate servers can be used by installing a software patch.