Packet switching

Packet switching is a network communications method that groups all transmitted data, irrespective of content, type, or structure, into suitably sized blocks called packets. The network over which packets are transmitted is a shared network that routes each packet independently of all others and allocates transmission resources as needed. The principal goals of packet switching are to optimize the utilization of available link capacity and to increase the robustness of communication.
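To make the idea concrete, the following is a minimal sketch (not any real protocol) of splitting a byte stream into packets and reassembling it; the packet fields, the 1024-byte payload limit, and the function names are illustrative assumptions only.

```python
from dataclasses import dataclass

MAX_PAYLOAD = 1024  # illustrative maximum payload size in bytes


@dataclass
class Packet:
    seq: int        # sequence number so the receiver can restore ordering
    payload: bytes  # a slice of the original data


def packetize(data: bytes) -> list[Packet]:
    """Split an arbitrary byte stream into packets of at most MAX_PAYLOAD bytes."""
    return [
        Packet(seq=i, payload=data[off:off + MAX_PAYLOAD])
        for i, off in enumerate(range(0, len(data), MAX_PAYLOAD))
    ]


def reassemble(packets: list[Packet]) -> bytes:
    """Rebuild the original stream; packets may arrive in any order."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))


if __name__ == "__main__":
    data = bytes(range(256)) * 12                     # 3072 bytes of sample data
    pkts = packetize(data)
    print(len(pkts), "packets")                       # -> 3 packets
    assert reassemble(list(reversed(pkts))) == data   # reassembly tolerates reordering
```

Because each packet carries its own sequence number, the network is free to route and deliver packets independently, which is exactly what allows independent routing and statistical sharing of links.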

Network resources are managed by statistical multiplexing or dynamic bandwidth allocation, in which a physical communication channel is effectively divided into an arbitrary number of logical variable-bit-rate channels or data streams. Each logical stream consists of a sequence of packets, which are normally forwarded by a network node asynchronously in a first-in, first-out fashion. Alternatively, the packets may be forwarded according to some scheduling discipline for fair queuing or for differentiated or guaranteed quality of service, as sketched below. In the case of a shared physical medium, the packets may be delivered according to some packet-mode multiple access scheme. When traversing network nodes, packets are buffered and queued, resulting in variable delay and throughput that depend on the traffic load in the network.
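The toy sketch below contrasts the two forwarding disciplines mentioned above: plain first-in, first-out queuing versus a simple round-robin form of fair queuing across per-flow queues. It is not any real router's implementation; the flow identifiers and function names are assumptions made for illustration.

```python
from collections import deque


def fifo_forward(arrivals):
    """Forward packets strictly in arrival (first-in, first-out) order."""
    queue = deque(arrivals)
    while queue:
        yield queue.popleft()


def round_robin_forward(arrivals):
    """Serve one packet per flow in turn, so a bursty flow cannot starve others."""
    flows = {}
    for flow_id, packet in arrivals:
        flows.setdefault(flow_id, deque()).append((flow_id, packet))
    while flows:
        for flow_id in list(flows):
            yield flows[flow_id].popleft()
            if not flows[flow_id]:
                del flows[flow_id]


if __name__ == "__main__":
    # Flow "A" bursts three packets before flow "B" sends one.
    arrivals = [("A", 1), ("A", 2), ("A", 3), ("B", 1)]
    print(list(fifo_forward(arrivals)))         # [('A', 1), ('A', 2), ('A', 3), ('B', 1)]
    print(list(round_robin_forward(arrivals)))  # [('A', 1), ('B', 1), ('A', 2), ('A', 3)]
```

Under FIFO, flow B's packet waits behind flow A's entire burst; under the round-robin discipline, each flow gets a turn, which is the basic idea behind fair queuing and differentiated service.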

Packet switching contrasts with another principal networking paradigm, circuit switching, a method that sets up a limited number of dedicated connections of constant bit rate and constant delay between nodes for exclusive use during the communication session.

Packet-mode (or packet-oriented, packet-based) communication may be used with or without intermediate forwarding nodes (packet switches).
