Saturday, 16 June 2018



In information theory and coding theory, with applications in computer science and telecommunications, error detection and correction (or error control) comprises techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to the receiver. Error detection techniques allow such errors to be detected, while error correction enables reconstruction of the original data in many cases.





Definition

Error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver. Error correction is the detection of errors and the reconstruction of the original, error-free data.




History

The modern development of error-correcting codes in 1947 is due to Richard W. Hamming. A description of Hamming's code appeared in Claude Shannon's A Mathematical Theory of Communication and was quickly generalized by Marcel J. E. Golay.



Introduction

The general idea for achieving error detection and correction is to add some redundancy (that is, some extra data) to a message, which the receiver can use to check the consistency of the delivered message, and to recover data that has been determined to be corrupted. Error-detection and correction schemes can be either systematic or non-systematic: in a systematic scheme, the transmitter sends the original data and attaches a fixed number of check bits (or parity data), which are derived from the data bits by some deterministic algorithm. If only error detection is required, the receiver can simply apply the same algorithm to the received data bits and compare its output with the received check bits; if the values do not match, an error has occurred at some point during the transmission. In a system that uses a non-systematic code, the original message is transformed into an encoded message that has at least as many bits as the original message.

Good error control performance requires the scheme to be selected based on the characteristics of the communication channel. Common channel models include memoryless models, where errors occur randomly and with a certain probability, and dynamic models, where errors occur primarily in bursts. Consequently, error-detecting and -correcting codes can generally be distinguished between random-error-detecting/correcting and burst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors.

If the channel capacity cannot be determined, or is highly variable, an error-detection scheme may be combined with a system for retransmission of erroneous data. This is known as automatic repeat request (ARQ), and is most notably used on the Internet. An alternative approach to error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding.



Implementation

Error correction may generally be realized in two different ways:

  • Automatic repeat request (ARQ) (sometimes also referred to as backward error correction): This is an error control technique in which an error-detection scheme is combined with requests for retransmission of erroneous data. Every block of data received is checked using the error-detection code in use, and if the check fails, retransmission of the data is requested; this may be done repeatedly, until the data can be verified.
  • Forward error correction (FEC): The sender encodes the data using an error-correcting code (ECC) prior to transmission. The additional information (redundancy) added by the code is used by the receiver to recover the original data. In general, the reconstructed data is what is deemed the "most likely" original data.

ARQ and FEC may be combined, such that minor errors are corrected without retransmission, and major errors are corrected via a request for retransmission: this is called hybrid automatic repeat request (HARQ).



Error detection scheme

Error detection is most commonly realized using a suitable hash function (or checksum algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided.

There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors).

Random-error-correcting codes based on minimum distance coding can provide a strict guarantee on the number of detectable errors, but they cannot protect against a preimage attack. A repetition code, described in the section below, is a special case of an error-correcting code: although rather inefficient, a repetition code is suitable in some applications of error correction and detection due to its simplicity.

Repetition code

A repetition code is a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data are divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern "1011", the four-bit block can be repeated three times, producing "1011 1011 1011". However, if this twelve-bit pattern is received as "1010 1011 1011" - where the first block is unlike the other two - it can be determined that an error has occurred.

A repetition code is very inefficient, and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g., "1010 1010 1010" in the previous example would be detected as correct). The advantage of repetition codes is that they are extremely simple, and are in fact used in some transmissions of numbers stations.
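
As an illustration, here is a minimal Python sketch of the scheme above (the function names are illustrative, not from any standard library); besides detecting a block mismatch, repeating each block an odd number of times also allows correction by majority vote:

```python
def repetition_encode(bits, n=3):
    """Repeat the whole block n times (as in the "1011 1011 1011" example)."""
    return bits * n

def repetition_decode(received, block_len, n=3):
    """Majority-vote each bit position across the n repeated blocks."""
    blocks = [received[i * block_len:(i + 1) * block_len] for i in range(n)]
    decoded = ""
    for pos in range(block_len):
        ones = sum(1 for b in blocks if b[pos] == "1")
        decoded += "1" if ones > n // 2 else "0"
    return decoded

sent = repetition_encode("1011")                    # "101110111011"
corrupted = "1010" + "1011" + "1011"                # error in the first block
assert repetition_decode(corrupted, 4) == "1011"    # majority vote recovers the data
```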

Parity bit

A parity bit is a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that can be used to detect single or any other odd number (i.e., three, five, etc.) of errors in the output. An even number of flipped bits will make the parity bit appear correct even though the data is erroneous.
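
A short Python sketch of even parity (with illustrative helper names), including the blind spot for an even number of flipped bits noted above:

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    parity = bits.count("1") % 2
    return bits + str(parity)

def parity_ok(word):
    """Check: an even-parity word must contain an even number of 1s."""
    return word.count("1") % 2 == 0

word = add_even_parity("1011")        # "10111"
assert parity_ok(word)
one_flip = "0" + word[1:]             # a single-bit error
assert not parity_ok(one_flip)        # detected
two_flips = "01" + word[2:]           # an even number of errors
assert parity_ok(two_flips)           # goes undetected, as noted above
```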

Extensions and variations on the parity bit mechanism are horizontal redundancy checks, vertical redundancy checks, and "double", "dual", or "diagonal" parity (used in RAID-DP).

Checksums

A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values). The sum may be negated by means of a ones'-complement operation prior to transmission to detect errors resulting in all-zero messages.
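
The following is a small Python sketch of such a ones'-complement checksum over byte-sized words (the word size and sample message are illustrative):

```python
def ones_complement_checksum(data, word_size=8):
    """Sum the words with end-around carry, then take the ones' complement."""
    mask = (1 << word_size) - 1
    total = 0
    for word in data:
        total += word
        total = (total & mask) + (total >> word_size)  # end-around carry
    return ~total & mask

message = bytes([0x12, 0x34, 0x56])
cksum = ones_complement_checksum(message)
# Receiver side: the words plus the checksum must sum (ones' complement) to all ones
total = 0
for w in list(message) + [cksum]:
    total += w
    total = (total & 0xFF) + (total >> 8)
assert total == 0xFF
```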

Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Damm algorithm, the Luhn algorithm, and the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing or remembering identification numbers.

A cyclic redundancy check (CRC) is a non-secure hash function designed to detect accidental changes to digital data in computer networks; as a result, it is not suitable for detecting maliciously introduced errors. It is characterized by specification of what is called a generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend, such that the remainder becomes the result.

Cyclic codes have favorable properties that make them well suited for detecting burst errors. CRCs are particularly easy to implement in hardware, and are therefore commonly used in digital networks and storage devices such as hard disk drives.

Even parity is a special case of a cyclic redundancy check, where the single-bit CRC is generated by the divisor x + 1.
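
A minimal Python sketch of CRC computation by polynomial long division over GF(2) (bit strings are used for clarity; real implementations work on machine words): with the generator x + 1, written as "11", the one-bit remainder reduces to even parity, as noted above.

```python
def crc_remainder(data_bits, poly_bits):
    """Polynomial long division over GF(2); the remainder is the CRC."""
    n = len(poly_bits) - 1
    padded = list(map(int, data_bits)) + [0] * n   # append n zero bits
    poly = list(map(int, poly_bits))
    for i in range(len(data_bits)):
        if padded[i]:                              # leading bit set: subtract (XOR) the divisor
            for j in range(len(poly)):
                padded[i + j] ^= poly[j]
    return "".join(map(str, padded[-n:]))

# With generator polynomial x + 1 ("11"), the 1-bit CRC equals the even parity bit
assert crc_remainder("1011", "11") == "1"   # "1011" has odd parity
assert crc_remainder("1001", "11") == "0"   # "1001" has even parity
```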

Cryptographic hash function

The output of a cryptographic hash function, also known as a message digest, can provide strong assurances about data integrity, whether changes of the data are accidental (e.g., due to transmission errors) or maliciously introduced. Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is infeasible to find some input data (other than the one given) that will yield the same hash value. If an attacker can change not only the message but also the hash value, then a keyed hash or message authentication code (MAC) can be used for additional security. Without knowing the key, it is infeasible for the attacker to calculate the correct keyed hash value for a modified message.
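
Using Python's standard hashlib and hmac modules, the distinction between a plain digest and a keyed MAC can be sketched as follows (the message and key are purely illustrative):

```python
import hashlib
import hmac

message = b"transfer 100 to account 42"
digest = hashlib.sha256(message).hexdigest()      # message digest: detects any change
assert hashlib.sha256(b"transfer 900 to account 42").hexdigest() != digest

# With a shared secret key, an HMAC also defeats attackers who can rewrite the digest
key = b"shared-secret"                            # illustrative key
tag = hmac.new(key, message, hashlib.sha256).digest()
# Constant-time comparison avoids leaking information through timing
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
```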

Error correction code

Any error-correcting code can be used for error detection. A code with minimum Hamming distance, d, can detect up to d - 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of detected errors is desired.

Codes with minimum Hamming distance d = 2 are degenerate cases of error-correcting codes, and can be used to detect single errors. The parity bit is an example of a single-error-detecting code.
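
A brief Python illustration of the d - 1 detection guarantee, using the (3,1) repetition code, whose minimum distance is 3:

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length code words differ."""
    return sum(x != y for x, y in zip(a, b))

# A code with minimum distance d = 3 detects up to d - 1 = 2 errors:
codewords = ["000", "111"]          # the (3,1) repetition code
assert hamming_distance("000", "111") == 3
# flipping one or two bits of "000" never yields another valid code word,
# so such errors are always detected
assert "010" not in codewords and "011" not in codewords
```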



Error correction

Automatic repeat request (ARQ)

Automatic Repeat reQuest (ARQ) is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to indicate that it has correctly received a data frame.

Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions.

Three types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ.

ARQ is appropriate if the communication channel has varying or unknown capacity, such as is the case on the Internet. However, ARQ requires the availability of a back channel, results in possibly increased latency due to retransmissions, and requires the maintenance of buffers and timers for retransmissions, which in the case of network congestion can put a strain on the server and overall network capacity.
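
The stop-and-wait variant can be sketched in Python as follows; the channel model and its loss rate are invented for illustration and are not part of any real protocol:

```python
import random

def transmit(frame, loss_rate=0.3):
    """Toy unreliable channel: returns None when the frame (or its ACK) is lost."""
    return None if random.random() < loss_rate else frame

def stop_and_wait(frame, max_retries=10):
    """Resend until an acknowledgment arrives or the retry budget is exhausted."""
    for attempt in range(1, max_retries + 1):
        if transmit(frame) is not None:      # frame arrived; receiver would ACK
            return attempt                   # number of transmissions used
    raise TimeoutError("retransmission count exceeded")

random.seed(1)
print("delivered after", stop_and_wait(b"data"), "transmission(s)")
```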

For example, ARQ is used on shortwave radio data links in the form of ARQ-E, or combined with multiplexing as ARQ-M.

Error-correcting code

An error-correcting code (ECC) or forward error correction (FEC) code is a process of adding redundant data, or parity data, to a message, such that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) are introduced, either during the process of transmission or on storage. Since the receiver does not have to ask the sender for retransmission of the data, a backchannel is not required in forward error correction, and it is therefore suitable for simplex communication such as broadcasting. Error-correcting codes are frequently used in lower-layer communication, as well as for reliable storage in media such as CDs, DVDs, hard disks, and RAM.

Error correction codes are usually distinguished between convolutional codes and block codes:

  • Convolutional codes are processed on a bit-by-bit basis. They are particularly suitable for implementation in hardware, and the Viterbi decoder allows optimal decoding.
  • Block codes are processed on a block-by-block basis. Early examples of block codes are repetition codes, Hamming codes, and multidimensional parity-check codes. They were followed by a number of efficient codes, Reed-Solomon codes being the most notable due to their current widespread use. Turbo codes and low-density parity-check codes (LDPC) are relatively new constructions that can provide almost optimal efficiency.
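
As a concrete example of a block code, here is a minimal Python sketch of the classic Hamming(7,4) code, which corrects any single-bit error in a 7-bit code word (the bit ordering follows the conventional p1 p2 d1 p3 d2 d3 d4 layout):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a Hamming(7,4) code word (p1 p2 d1 p3 d2 d3 d4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(w):
    """Recompute the parity checks; the syndrome gives the 1-based error position."""
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        w[syndrome - 1] ^= 1         # flip the erroneous bit
    return [w[2], w[4], w[5], w[6]]  # extract the data bits

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                         # inject a single-bit error
assert hamming74_correct(word) == [1, 0, 1, 1]
```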

Shannon's theorem is an important theorem in forward error correction, and describes the maximum information rate at which reliable communication is possible over a channel that has a certain error probability or signal-to-noise ratio (SNR). This strict upper limit is expressed in terms of the channel capacity. More specifically, the theorem says that there exist codes such that with increasing encoding length the probability of error on a discrete memoryless channel can be made arbitrarily small, provided that the code rate is smaller than the channel capacity. The code rate is defined as the fraction k/n of k source symbols and n encoded symbols.
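
For instance, the capacity of a binary symmetric channel with crossover probability p is C = 1 - H(p), where H is the binary entropy function; a few lines of Python can evaluate it (a sketch for illustration):

```python
from math import log2

def bsc_capacity(p):
    """Capacity of a binary symmetric channel: C = 1 - H(p) bits per channel use."""
    if p in (0, 1):
        return 1.0
    h = -p * log2(p) - (1 - p) * log2(1 - p)   # binary entropy H(p)
    return 1 - h

# Reliable communication requires a code rate k/n below the capacity
p = 0.1
print(f"C = {bsc_capacity(p):.3f}")   # a rate-1/2 code (k/n = 0.5) is admissible here
```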

The actual maximum code rate allowed depends on the error-correcting code used, and may be lower. This is because Shannon's proof was only of existential nature, and did not show how to construct codes that are both optimal and have efficient encoding and decoding algorithms.

Hybrid scheme

Hybrid ARQ is a combination of ARQ and forward error correction. There are two basic approaches:

  • Messages are always transmitted with FEC parity data (and error-detection redundancy). A receiver decodes a message using the parity information, and requests retransmission using ARQ only if the parity data was not sufficient for successful decoding (identified through a failed integrity check).
  • Messages are transmitted without parity data (only with error-detection information). If a receiver detects an error, it requests FEC information from the transmitter using ARQ, and uses it to reconstruct the original message.

The latter approach is particularly attractive on an erasure channel when using a rateless erasure code.



Applications

Applications that require low latency (such as telephone conversations) cannot use automatic repeat request (ARQ); they must use forward error correction (FEC). By the time an ARQ system discovers an error and re-transmits the data, the re-sent data will arrive too late to be of any use.

Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available. (This is also why FEC is used in data storage systems such as RAID and distributed data stores.)

Applications that use ARQ must have a return channel; applications having no return channel cannot use ARQ. Applications that require extremely low error rates (such as digital money transfers) must use ARQ. Reliability and inspection engineering also make use of the theory of error-correcting codes.

Internet

In a typical TCP/IP stack, error control is performed at multiple levels:

  • Each Ethernet frame carries a CRC-32 checksum. Frames received with incorrect checksums are discarded by the receiver hardware.
  • The IPv4 header contains a checksum protecting the contents of the header. Packets with mismatching checksums are dropped within the network or at the receiver.
  • The checksum was omitted from the IPv6 header in order to minimize processing costs in network routing and because current link-layer technology is assumed to provide sufficient error detection (see also RFC 3819).
  • UDP has an optional checksum covering the payload and addressing information from the UDP and IP headers. Packets with incorrect checksums are discarded by the operating system's network stack. The checksum is optional under IPv4 only, because the data-link-layer checksum may already provide the desired level of error protection.
  • TCP provides a checksum for protecting the payload and addressing information from the TCP and IP headers. Packets with incorrect checksums are discarded within the network stack, and eventually get retransmitted using ARQ, either explicitly (such as through triple-ack) or implicitly due to a timeout.
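
The 16-bit ones'-complement checksum shared by IPv4, UDP, and TCP (specified in RFC 1071) can be sketched in Python as follows (the sample header bytes are illustrative):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071: ones' complement of the ones' complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

segment = b"\x45\x00\x00\x1c"                      # illustrative header bytes
cksum = internet_checksum(segment)
# Verification: with the checksum word included, the folded sum complements to zero
assert internet_checksum(segment + cksum.to_bytes(2, "big")) == 0
```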

Deep-space telecommunication

The development of error-correction codes was tightly coupled with the history of deep-space missions due to the extreme dilution of signal power over interplanetary distances, and the limited power availability aboard space probes. Whereas early missions sent their data uncoded, starting from 1968 digital error correction was implemented in the form of (sub-optimally decoded) convolutional codes and Reed-Muller codes. The Reed-Muller code was well suited to the noise the spacecraft was subject to (approximately matching a bell curve), and was implemented on the Mariner spacecraft for missions between 1969 and 1977.

The Voyager 1 and Voyager 2 missions, which started in 1977, were designed to deliver color imaging amid scientific information from Jupiter and Saturn. This resulted in increased coding requirements, and thus the spacecraft were supported by (optimally Viterbi-decoded) convolutional codes that could be concatenated with an outer Golay (24,12,8) code.

The Voyager 2 craft additionally supported an implementation of a Reed-Solomon code: the concatenated Reed-Solomon-Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune. Both crafts have used the V2 RSV coding since the ECC system upgrades in 1989.

The CCSDS currently recommends usage of error-correction codes with performance similar to the Voyager 2 RSV code as a minimum. Concatenated codes are increasingly falling out of favor with space missions, and are replaced by more powerful codes such as Turbo codes or LDPC codes.

The different kinds of deep-space and orbital missions that are conducted suggest that trying to find a "one size fits all" error-correction system will be an ongoing problem for some time to come. For missions close to Earth, the nature of the noise in the communication channel is different from that which a spacecraft on an interplanetary mission experiences. Additionally, as a spacecraft increases its distance from Earth, the problem of correcting for noise becomes more difficult.

Satellite broadcasting (DVB)

The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and High Definition TV) and IP data. Transponder availability and bandwidth constraints have limited this growth, because transponder capacity is determined by the selected modulation scheme and forward error correction (FEC) rate.

Overview

  • QPSK coupled with traditional Reed-Solomon and Viterbi codes has been used for nearly 20 years for the delivery of digital satellite TV.
  • Higher-order modulation schemes such as 8PSK, 16QAM, and 32QAM have enabled the satellite industry to increase transponder efficiency severalfold.
  • This increase in the information rate in a transponder comes at the expense of an increase in the carrier power needed to meet the threshold requirement for existing antennas.
  • Tests conducted using the latest chipsets demonstrate that the performance achieved by using Turbo Codes may be even lower than the 0.8 dB figure assumed in early designs.

Data storage

Error detection and correction codes are often used to improve the reliability of data storage media. A "parity track" was present on the first magnetic tape data storage in 1951. The "Optimal Rectangular Code" used in group coded recording tapes not only detects but also corrects single-bit errors. Some file formats, particularly archive formats, include a checksum (most often CRC32) to detect corruption and truncation, and can employ redundancy and/or parity files to recover portions of corrupted data. Reed-Solomon codes are used in compact discs to correct errors caused by scratches.

Modern hard drives use CRC codes to detect and Reed-Solomon codes to correct minor errors in sector reads, and to recover data from sectors that have "gone bad" and store that data in the spare sectors. RAID systems use a variety of error-correction techniques to correct errors when a hard drive completely fails. Filesystems such as ZFS or Btrfs, as well as some RAID implementations, support data scrubbing and resilvering, which allows bad blocks to be detected and (hopefully) recovered before they are used. The recovered data may be re-written to exactly the same physical location, to spare blocks elsewhere on the same piece of hardware, or the data may be rewritten onto replacement hardware.

Error correction memory

DRAM memory may provide increased protection against soft errors by relying on error-correcting codes. Such error-correcting memory, known as ECC- or EDAC-protected memory, is particularly desirable for high-fault-tolerant applications, such as servers, as well as deep-space applications due to increased radiation.

Error-correcting memory controllers traditionally use Hamming codes, although some use triple modular redundancy.

Interleaving allows distributing the effect of a single cosmic ray, which can potentially upset multiple physically neighboring bits, across multiple words by associating neighboring bits with different words. As long as a single-event upset (SEU) does not exceed the error threshold (e.g., a single error) in any particular word between accesses, it can be corrected (e.g., by a single-bit error-correcting code), and the illusion of an error-free memory system may be maintained.
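
A toy Python sketch of interleaving (the word contents and burst position are illustrative): transmitting column-by-column spreads a burst of adjacent errors so that each original word suffers at most one bit error, which a single-error-correcting code could then fix.

```python
def interleave(words, depth):
    """Transmit column-by-column so a burst hits at most one bit per word."""
    return [words[i][j] for j in range(len(words[0])) for i in range(depth)]

def deinterleave(stream, depth, word_len):
    """Reassemble the original words from the interleaved symbol stream."""
    return ["".join(stream[j * depth + i] for j in range(word_len)) for i in range(depth)]

words = ["1111", "0000", "1010"]
stream = interleave(words, 3)
# A burst of 3 adjacent errors corrupts only one bit in each original word
corrupted = ["0" if c == "1" else "1" for c in stream[:3]] + stream[3:]
restored = deinterleave(corrupted, 3, 4)
for original, damaged in zip(words, restored):
    assert sum(a != b for a, b in zip(original, damaged)) <= 1
```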

In addition to hardware providing the features required for ECC memory to operate, operating systems usually contain related reporting facilities that are used to provide notifications when soft errors are transparently recovered. An increasing rate of soft errors might indicate that a DIMM module needs replacing, and such feedback information would not be easily available without the related reporting capabilities. One example is the Linux kernel's EDAC subsystem (previously known as bluesmoke), which collects data from error-checking-enabled components inside a computer system; besides collecting and reporting back events related to ECC memory, it also supports other checksumming errors, including those detected on the PCI bus.

Some systems also support memory scrubbing.



See also

  • Berger code
  • Burst error-correcting code
  • Link adaptation
  • List of algorithms for error detection and correction
  • List of error-correcting codes
  • List of hash functions
  • Reliability (computer networking)





Further reading

  • Shu Lin; Daniel J. Costello, Jr. (1983). Error Control Coding: Fundamentals and Applications. Prentice Hall. ISBN 0-13-283796-X.



External links

  • On-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay, contains chapters on elementary error-correcting codes; on the theoretical limits of error correction; and on the latest state-of-the-art error-correcting codes, including low-density parity-check codes, turbo codes, and fountain codes.
  • Compute parameters of linear codes - an on-line interface for generating and computing parameters (e.g., minimum distance, covering radius) of linear error-correcting codes.
  • ECC page
  • SoftECC: A System for Software Memory Integrity Checking
  • A Tunable, Software-based DRAM Error Detection and Correction Library for HPC
  • Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing

Source of the article: Wikipedia
