
Monday, February 7, 2011

Understanding Electronic Communication


We normally think of communication as an act that involves people and activities like talking, writing, and reading. In this book, though, we are also interested in communication between computers and between people and computers. Communicating is the act of giving, transmitting, or exchanging information. In this chapter, we discuss how a computer processes data and communicates (transmits information) with its user. Understanding this process is fundamental to understanding how computers work.


Computer Communication

In this lesson, we examine the fundamentals of electronic communication and explore how computer communication differs from human communication.

After this lesson, you will be able to
  • Understand how a computer transmits and receives information
  • Explain the principles of computer language

Early Forms of Communication

Humans communicate primarily through words, both spoken and written. From ancient times until about 150 years ago, messages were either verbal or written in form. Getting a message to a distant recipient was often slow, and sometimes the message (or the messenger) got lost in the process.
As time and technology progressed, people developed devices to help them communicate faster over greater distances. Items such as lanterns, mirrors, and flags were used to send messages quickly over an extended visual range.
All "out of earshot" communications have one thing in common: They require some type of "code" to convert human language to a form of information that can be packaged and sent to the remote location. It might be a set of letters in an alphabet, a series of analog pulses over a telephone line, or a sequence of binary numbers in a computer. On the receiving end, this code needs to be converted back to language that people can understand.
Obstacles to effective communication include differences in languages and in how the speaker and listener give meaning to words. Language between people is made up of more than words. Gestures, emphasis, body language, and social context all affect how we interpret interpersonal communications. Most of these elements have no bearing on human–machine interactions. There are other issues we must understand to be able to deal and interact with computers.

Dots and Dashes, Bits and Bytes

Telegraphs and early radio communication used codes for transmissions. The most common, Morse code (named after its creator, Samuel F. B. Morse), is based on assigning a series of pulses to represent each letter of the alphabet. These pulses are sent over a wire one after another, and the operator on the receiving end converts the code back into letters and words. Morse code remained in official use for messages at sea until nearly the end of the twentieth century; it was officially retired in 1999.
Morse used a code in which any single transmitted value had two possible states: a dot or a dash. By combining the dots and dashes into groups, an operator was able to represent letters and, by stringing them together, words. That form of on–off notation can also be used to provide two numbers, 0 and 1. The value 0 represents no signal, or off, and the value 1 represents a signal, or on, state.
This type of number language is called binary notation because it uses only two digits, usually 0 and 1. It was first used by the ancient Chinese, who used the terms yin (empty) and yang (full) to build complex philosophical models of how the universe works.
Our computers are complex boxes of switches, each of which has two states, and they use the same binary scheme. The state of a given switch, on or off, represents a value that can be used as a code. Modern computer technology uses terms other than yin and yang, but the same binary mathematics creates virtual worlds inside our modern machines.

The Binary Language of Computers

The binary math terms that follow are fundamental to understanding PC technology.

Bits

A bit is the smallest unit of information that is recognized by a computer: a single on or off event.

Bytes

A byte is a group of 8 bits. A byte is required to represent one character of information. Pressing one key on a keyboard is equivalent to sending 1 byte of information to the computer's central processing unit (CPU). A byte is the standard unit by which memory is measured in a computer—values are expressed in terms of kilobytes (KB) or megabytes (MB). The table that follows lists units of computer memory and their values.
Memory Unit        Value
Bit                Smallest unit of information; shorthand term for binary digit
Nibble             4 bits (half of a byte)
Byte               8 bits (equal to one character)
Word               16 bits on most personal computers (longer words possible on larger computers)
Kilobyte (KB)      1024 bytes
Megabyte (MB)      1,048,576 bytes (approximately 1 million bytes, or 1024 KB)
Gigabyte (GB)      1,073,741,824 bytes (approximately 1 billion bytes, or 1024 MB)
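These units all scale by powers of 2, so each one is 1024 times the one before it. As a quick check, a minimal Python sketch (not part of the original lesson) can reproduce the byte counts in the table from those powers of 2:

# Each unit is a power of 2: 1 KB = 2**10 bytes, 1 MB = 2**20, 1 GB = 2**30.
units = {
    "Kilobyte (KB)": 2 ** 10,
    "Megabyte (MB)": 2 ** 20,
    "Gigabyte (GB)": 2 ** 30,
}

for name, size in units.items():
    # Prints 1,024; 1,048,576; and 1,073,741,824 -- the values listed in the table.
    print(f"{name} = {size:,} bytes")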

The Binary System

The binary system of numbers uses a base of 2 (the digits 0 and 1). As described earlier, a bit can exist in only two states, on or off. When bits are represented visually:
  • 0 equals off.
  • 1 equals on.
The following is 1 byte of information in which all 8 bits are set to 0. In the binary system, this sequence of eight 0s represents a single value: the number 0.
0     0     0     0     0     0     0     0
The binary system is one of several numerical systems that can be used for counting. It is similar to the decimal system, which we use to calculate everyday numbers and values. The prefix dec in the term decimal system comes from the Latin word for 10 and denotes a base of 10, which means the decimal system is based on the 10 numbers 0 through 9. The binary system has a base of 2, the numbers 0 and 1.
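A minimal Python sketch (illustrative only) makes the relationship between the two bases concrete: int(s, 2) reads a string of binary digits as a base-2 number, and format(n, "b") writes a decimal value back out in binary.

# Read a string of binary digits as a base-2 number.
print(int("0101", 2))      # 5

# Write a decimal value in binary, padded to 4 digits.
print(format(9, "04b"))    # 1001

# A byte of all zeros is simply the value 0.
print(int("00000000", 2))  # 0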

Counting in Binary Notation

There are some similarities between counting in binary notation and counting in the decimal system we all learned in grade school. In the decimal system, the rightmost whole-number position (the one just to the left of the decimal point) is the "digits" column. A number written there has a value of 0 to 9. The column to the left of the digits column (if present) is the "10s" column, valued from 10 to 90. Each additional column in the decimal system is worth a factor of 10 more than the one to its right. To get the total value of a number, we add together all the columns: 111 is the sum of 100 + 10 + 1.
NOTE

A factor is an item that is multiplied in a multiplication problem. For example, 2 and 3 are factors in the problem 2 × 3.
Binary notation uses the same system of right-to-left columns of ascending values, but each column has only two possible values (0 or 1) instead of ten (0–9). Thus, in the binary system, the rightmost column can represent only 0 or 1; the next column to the left has a place value of 2, so a 1 in that position represents 2 (or 3, when the rightmost column is also 1). The columns that follow have values of 4, then 8, then 16, and so on, each column doubling the value of the one to its right. The factor used in the binary system is 2, and, just as in the decimal system, 0 is a number counted in that tally. Examples of bytes of information (eight bits each) follow.

Byte—Example A

The value of this byte is 0 because all bits are off (0 = off).
0     0     0     0     0     0     0     0     (8 bits)
128   64    32    16    8     4     2     1     (place values)

Byte—Example B

In this example, two of the bits are turned on (1 = on). The total value of this byte is determined by adding the values associated with the bit positions that are on. This byte represents the number 5 (4 + 1).
0     0     0     0     0     1     0     1     (8 bits)
128   64    32    16    8     4     2     1     (place values)

Byte—Example C

In this example, two different bits are turned on to represent the number 9 (8 + 1).
0     0     0     0     1     0     0     1     (8 bits)
128   64    32    16    8     4     2     1     (place values)
The mathematically inclined will quickly realize that 255 is the largest value that can be represented by a single byte. (Keep in mind that we start with 0 and go to 255, which gives 256 possible values.)
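The place-value addition used in these examples is easy to sketch in Python. The short function below (an illustration, not part of the lesson) multiplies each bit by its column value and adds up the results:

def byte_value(bits):
    # Add up the place values (128, 64, 32, 16, 8, 4, 2, 1) of the bits that are on.
    place_values = [128, 64, 32, 16, 8, 4, 2, 1]
    return sum(place * bit for place, bit in zip(place_values, bits))

print(byte_value([0, 0, 0, 0, 0, 1, 0, 1]))  # 5   (Example B: 4 + 1)
print(byte_value([0, 0, 0, 0, 1, 0, 0, 1]))  # 9   (Example C: 8 + 1)
print(byte_value([1, 1, 1, 1, 1, 1, 1, 1]))  # 255 (the largest single-byte value)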
Because computers use binary numbers and humans use decimal numbers, A+ technicians must be able to perform simple conversions. The following table shows the decimal numbers 0 through 9 and their binary equivalents. You will need to know this information. The best way to prepare is to learn how to add in binary numbers rather than merely memorizing the values.
Decimal Number    Binary Equivalent
0                 0000
1                 0001
2                 0010
3                 0011
4                 0100
5                 0101
6                 0110
7                 0111
8                 1000
9                 1001
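As a study aid, the table above can be reproduced with a couple of lines of Python (a practice sketch, not part of the lesson):

# Print the decimal numbers 0 through 9 with their 4-bit binary equivalents.
for n in range(10):
    print(f"{n}    {n:04b}")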
Numbers are fine for calculating, but today's computers must handle text, sound, streaming video, images, and animation as well. To handle all of that, standard codes are used to translate between binary machine language and the type of data being represented and presented to the human user. The binary system is still used to transfer values, but those values have a secondary meaning that is handled by the code. The first common, code-based language was developed to handle text characters and serves as a good example that lets us examine some other core concepts as well.

Parallel and Serial Devices

The telegraph and the individual wires in our PCs are serial devices. This means that only one element of code can be sent at a time. Like a one-lane tunnel, there is room for only one person to pass through at a time. All electronic communications are, at some level, serial, because a single wire can carry only one signal state, on or off, at any given moment.
To speed things up, we can add more wires. This allows simultaneous transmission of signals. Or, to continue our analogy, it's like adding another set of tunnels next to the first one: We still have only one person per tunnel, but we can get more people through because they are traveling in parallel. That is the difference between parallel and serial data transmission. In PC technology, we often string eight wires in a parallel set, allowing 8 bits to be sent at once. Figure 2.1 illustrates serial and parallel communication.

Figure 2.1 Serial and parallel communication
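To make the difference concrete, here is a small Python sketch (purely illustrative) that "transmits" the byte 01000001, first serially, one bit per step over a single wire, and then in parallel, with all 8 bits on separate wires in a single step:

byte = [0, 1, 0, 0, 0, 0, 0, 1]  # 8 bits (the ASCII code for "A")

# Serial transmission: one wire, so the bits go out one at a time, in eight steps.
for step, bit in enumerate(byte, start=1):
    print(f"serial step {step}: the wire carries {bit}")

# Parallel transmission: eight wires, so the whole byte goes out in a single step.
print("parallel step 1: the eight wires carry", byte)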

ASCII Code

The standard code for handling text characters on most modern computers is called ASCII (American Standard Code for Information Interchange). The basic ASCII standard consists of 128 codes representing the English alphabet, punctuation, and certain control characters. Most systems today recognize 256 codes: the original 128 and an additional 128 codes called the extended character set.
Remember that a byte represents one character of information; 4 bytes are needed to represent a string of four characters. The following 4 bytes represent the text string 12AB (using ASCII code):
00110001     00110010     01000001     01000010 
1            2            A            B
The following illustrates how the binary language spells the word binary:
B            I            N            A            R            Y 
01000010     01001001     01001110     01000001     01010010     01011001
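Python's built-in ord function returns a character's ASCII code, and string formatting shows that code as an 8-bit pattern, so the byte strings above can be checked directly (a minimal sketch, not part of the lesson):

def to_ascii_bytes(text):
    # Return the 8-bit binary pattern for each character in the string.
    return [format(ord(ch), "08b") for ch in text]

print(to_ascii_bytes("12AB"))
# ['00110001', '00110010', '01000001', '01000010']

print(to_ascii_bytes("BINARY"))
# ['01000010', '01001001', '01001110', '01000001', '01010010', '01011001']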
NOTE

It is very important to understand that in computer processing, the "space" is a significant character. Like any other character, the space has a binary value that must be included in the data stream, and every item in a code must be spelled out for the machine to process it. The absence or presence of a space is critical and sometimes causes confusion or frustration among new users. Uppercase and lowercase letters also have different values. Some operating systems (for example, UNIX) distinguish between them in commands, whereas others (for example, MS-DOS) treat a command the same no matter how it is cased.
The following table is a partial representation of the ASCII character set. Even in present-day computing, laden with multimedia and sophisticated programming, ASCII retains an honored and important position.
Symbol    Binary (1 byte)    Decimal        Symbol    Binary (1 byte)    Decimal
0         00110000           48             V         01010110           86
1         00110001           49             W         01010111           87
2         00110010           50             X         01011000           88
3         00110011           51             Y         01011001           89
4         00110100           52             Z         01011010           90
5         00110101           53             a         01100001           97
6         00110110           54             b         01100010           98
7         00110111           55             c         01100011           99
8         00111000           56             d         01100100           100
9         00111001           57             e         01100101           101
A         01000001           65             f         01100110           102
B         01000010           66             g         01100111           103
C         01000011           67             h         01101000           104
D         01000100           68             i         01101001           105
E         01000101           69             j         01101010           106
F         01000110           70             k         01101011           107
G         01000111           71             l         01101100           108
H         01001000           72             m         01101101           109
I         01001001           73             n         01101110           110
J         01001010           74             o         01101111           111
K         01001011           75             p         01110000           112
L         01001100           76             q         01110001           113
M         01001101           77             r         01110010           114
N         01001110           78             s         01110011           115
O         01001111           79             t         01110100           116
P         01010000           80             u         01110101           117
Q         01010001           81             v         01110110           118
R         01010010           82             w         01110111           119
S         01010011           83             x         01111000           120
T         01010100           84             y         01111001           121
U         01010101           85             z         01111010           122
NOTE

All letters have a separate ASCII value for uppercase and lowercase. The capital letter "A" is 65, and the lowercase "a" is 97.
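The same ord function makes this note concrete: each lowercase letter's ASCII code is exactly 32 greater than its uppercase counterpart, and the space has a code of its own (again, a sketch for illustration):

print(ord("A"), ord("a"))    # 65 97
print(ord("a") - ord("A"))   # 32 -- every lowercase code is 32 higher than its uppercase
print(ord(" "))              # 32 -- the space character has its own ASCII code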
Keep in mind that computers are machines, and they do not really "perceive" numbers as anything other than electrical charges setting a switch on or off. Like a binary digit, each switch can exist in only one of two states. The computer interprets the presence of a charge as 1 and the absence of a charge as 0. This technology allows a computer to process information.

Lesson Summary

The following points summarize the main elements of this lesson:
  • Computers communicate using binary language.
  • A bit is the smallest unit of information that is recognized by a computer.
  • ASCII is the standard code that handles text characters for computers.
