Monday 4 December 2017

Computer Science



Example: 00010111 × 00000011 = 01000101, i.e. 23 × 3 = 69 in base 10.

    0 0 0 1 0 1 1 1   = 23 (base 10)
  × 0 0 0 0 0 0 1 1   =  3 (base 10)
  -----------------
      1 1 1 1 1       (carries)
    0 0 0 1 0 1 1 1
  0 0 0 1 0 1 1 1
  -----------------
  0 0 1 0 0 0 1 0 1   = 69 (base 10)
Binary division follows the same rules as decimal division.
Figure 1.8.
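As a quick check, Python's built-in base-2 conversions can reproduce the worked examples above (a minimal sketch, not part of the original lesson):

    # Verify the multiplication example: int(s, 2) parses a binary
    # string, bin(n) formats an integer in binary.
    a = int("00010111", 2)     # 23 (base 10)
    b = int("00000011", 2)     #  3 (base 10)
    print(a * b, bin(a * b))   # 69 0b1000101, matching 01000101 above

    # Division follows the same long-division rules; divmod gives
    # the quotient and remainder in one step.
    q, r = divmod(a, b)        # 23 / 3 = 7 remainder 2
    print(bin(q), bin(r))      # 0b111 0b10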
Logical operations on Binary Numbers
Logical Operation with one or two bits
NOT: Changes the value of a single bit. If it is a "1", the result is "0"; if it is a "0", the result is "1".
AND: Compares 2 bits and if they are both "1", then the result is "1"; otherwise, the result is "0".
OR: Compares 2 bits and if either or both bits are "1", then the result is "1"; otherwise, the result is "0".
XOR: Compares 2 bits and if exactly one of them is "1" (i.e., if they are different values), then the result is "1"; otherwise (if the bits are the same), the result is "0".
Logical operators between two bits have the following truth table (Table 1.2):

x  y | x AND y | x OR y | x XOR y
-----+---------+--------+--------
1  1 |    1    |   1    |    0
1  0 |    0    |   1    |    1
0  1 |    0    |   1    |    1
0  0 |    0    |   0    |    0
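The table can be reproduced with Python's bitwise operators, which apply exactly these rules (a small sketch for illustration):

    # & is AND, | is OR, ^ is XOR; on single bits they follow Table 1.2.
    print("x y  AND  OR  XOR")
    for x in (1, 0):
        for y in (1, 0):
            print(f"{x} {y}   {x & y}    {x | y}   {x ^ y}")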
Logical Operation with one or two binary numbers
A logical (bitwise) operation operates on one or two bit patterns or binary numerals at the level of
their individual bits.
Example
NOT 0111
= 1000
AND operation
An AND operation takes two binary representations of equal length and performs the logical AND
operation on each pair of corresponding bits. In each pair, the result is 1 if the first bit is 1 AND
the second bit is 1. Otherwise, the result is 0.
Example
0101
AND 0011
= 0001
OR operation
An OR operation takes two bit patterns of equal length, and produces another one of the same
length by matching up corresponding bits (the first of each; the second of each; and so on) and
performing the logical OR operation on each pair of corresponding bits.
Example
0101
OR 0011
= 0111
XOR Operation
An exclusive or operation takes two bit patterns of equal length and performs the logical XOR
operation on each pair of corresponding bits.
Example
0101
XOR 0011
= 0110
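All four operations are available on whole bit patterns in Python; the sketch below reruns the examples above on 4-bit values. Note that Python integers are unbounded, so NOT is done here by XOR-ing with a 4-bit mask of ones rather than with the ~ operator, which would flip an infinite sign extension:

    MASK = 0b1111   # four-bit field of ones

    print(format(0b0111 ^ MASK, "04b"))    # NOT 0111 -> 1000
    print(format(0b0101 & 0b0011, "04b"))  # AND      -> 0001
    print(format(0b0101 | 0b0011, "04b"))  # OR       -> 0111
    print(format(0b0101 ^ 0b0011, "04b"))  # XOR      -> 0110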
Symbol Representation
Basic Principles
It is important to handle character data. Character data includes not just alphabetic characters, but also numeric characters, punctuation, spaces, etc. They all need to be represented in binary.
Character data has no inherent mathematical properties, so assigning binary codes to characters is somewhat arbitrary.
ASCII Code Table
ASCII stands for American Standard Code for Information Interchange. The ASCII standard, developed in 1963, permitted machines from different manufacturers to exchange data.
The ASCII code table consists of 128 binary values (0 to 127), each associated with a character or command. The non-printing characters are used to control peripherals such as printers.
Figure 1.9. ASCII coding table
The extended ASCII character set consists of a further 128 characters (codes 128 to 255), representing additional special, mathematical, graphic and foreign characters.
Figure 1.10. The extended ASCII characters
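In Python, ord() returns a character's code and chr() goes the other way, which makes the table easy to explore (a small sketch):

    # Codes 0-127 are standard ASCII; 'A' is 65, i.e. 1000001 in binary.
    for ch in ("A", "a", "0", " "):
        print(ch, ord(ch), format(ord(ch), "07b"))

    print(chr(65))           # 'A'
    print(chr(10) == "\n")   # True: code 10 is the LF control character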
Unicode Code Table
There are some problems with the ASCII code table. With the ASCII character set, string datatypes allocate one byte per character. But logographic languages such as Chinese, Japanese, and Korean need far more than 256 characters for reasonable representation. Even Vietnamese, a language written almost entirely with Latin letters, needs 61 characters for representation. Where can we find codes for all these characters? Is two bytes per character a solution?
Hundreds of different encoding systems were invented, but these encoding systems conflict with one another: two encodings can use the same number for two different characters, or use different numbers for the same character.
The Unicode standard was first published in 1991. With two bytes for each character, it can represent 2^16 − 1 = 65,535 different characters.
The Unicode standard has been adopted by such industry leaders as HP, IBM, Microsoft, Oracle,
Sun, and many others. It is supported in many operating systems, all modern browsers, and many
other products.
The obvious advantages of using Unicode are:
- To offer significant cost savings over the use of legacy character sets.
- To enable a single software product or a single website to be targeted across multiple platforms, languages and countries without re-engineering.
- To allow data to be transported through many different systems without corruption.
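A short sketch of Unicode in practice: every character has a code point, and an encoding such as UTF-8 maps code points to one or more bytes, so plain ASCII text still costs one byte per character while other scripts remain representable:

    # Code point and UTF-8 bytes for characters from different scripts.
    for ch in ("A", "é", "中"):
        print(ch, hex(ord(ch)), ch.encode("utf-8"))
    # A  0x41    b'A'              (1 byte)
    # é  0xe9    b'\xc3\xa9'       (2 bytes)
    # 中 0x4e2d  b'\xe4\xb8\xad'   (3 bytes)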
Representation of Real Numbers
Basic Principles
No human system of numeration can give a unique representation to real numbers. If you give the
first few decimal places of a real number, you are giving an approximation to it.
Mathematicians may think of one approach: a real number x can be approximated by any number in the range from x − epsilon to x + epsilon. This is fixed-point representation. Fixed-point representations are unsatisfactory for most applications involving real numbers.
Scientists or engineers will probably use scientific notation: a number is expressed as the product
of a mantissa and some power of ten.
A system of numeration for real numbers will typically store the same three data -- a sign, a mantissa, and an exponent -- into an allocated region of storage.
The analogues of scientific notation in computer are described as floating-point representations.
In the decimal system, the decimal point indicates the start of negative powers of 10:

12.34 = 1 × 10^1 + 2 × 10^0 + 3 × 10^(−1) + 4 × 10^(−2)
If we are using a system in base k (i.e. the radix is k), the 'radix point' serves the same function: digits to its right are multiplied by negative powers of k.
A floating point representation allows a large range of numbers to be represented in a relatively
small number of digits by separating the digits used for precision from the digits used for range.
To avoid multiple representations of the same number, floating point numbers are usually normalized so that there is only one nonzero digit to the left of the 'radix' point, called the leading digit.
A normalized (non-zero) floating-point number will be represented using

(−1)^s × d0.d1d2...d(p−1) × b^e

where
- s is the sign,
- d0.d1d2...d(p−1), termed the significand, has p significant digits, and each digit satisfies 0 ≤ di < b,
- e is the exponent, with emin ≤ e ≤ emax,
- b is the base (or radix).
Example
If b = 10 (base 10) and p = 3, the number 0.1 is represented as 0.100.
If b = 2 (base 2) and p = 24, the decimal number 0.1 cannot be represented exactly but is approximately 1.10011001100110011001101 × 2^(−4).
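This rounding is easy to observe in Python, whose floats are IEEE 754 doubles (a small sketch):

    print(f"{0.1:.20f}")     # 0.10000000000000000555...: not exactly 0.1
    print(0.1 + 0.2 == 0.3)  # False: the two sides round differently
    print((0.1).hex())       # 0x1.999999999999ap-4, the repeating
                             # 1001 pattern shown above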
Formally, (−1)^s × d0.d1d2...d(p−1) × b^e represents the value

(−1)^s × (d0 + d1 × b^(−1) + d2 × b^(−2) + ... + d(p−1) × b^(−(p−1))) × b^e
In brief, a normalized representation of a real number consists of:
- The range of the number: the number of digits in the exponent (i.e. bounded by emax) and the base b to which it is raised
- The precision: the number of digits p in the significand and its base b
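The definition can be made concrete by evaluating the normalized form directly from its parts; the helper below is purely illustrative (the function name and arguments are my own, not from the text):

    def fp_value(s, digits, b, e):
        # digits[0] is d0, the leading digit; digits[i] is scaled by b^(-i)
        significand = sum(d * b**-i for i, d in enumerate(digits))
        return (-1)**s * significand * b**e

    # 1.01 (base 2) x 2^3 = (1 + 0/2 + 1/4) * 8 = 10.0
    print(fp_value(0, [1, 0, 1], 2, 3))   # 10.0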
IEEE 754/85 Standard
There are many ways to represent floating point numbers. In order to improve portability, most computers use the IEEE 754 floating point standard.
There are two primary formats:
- 32 bit single precision
- 64 bit double precision
Single precision consists of:
- A single sign bit, 0 for positive and 1 for negative;
- An 8 bit base-2 (b = 2) excess-127 exponent, with emin = −126 (stored as 127 − 126 = 1 = 00000001 (base 2)) and emax = 127 (stored as 127 + 127 = 254 = 11111110 (base 2));
- A 23 bit base-2 (b = 2) significand, with a hidden bit giving a precision of 24 bits (i.e. 1.d1d2...d23).
Figure 1.11. Single precision memory format
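The three fields can be inspected from Python with the standard struct module, which packs a float into the 4-byte single precision format (a sketch, using the example value 0.15625 = 1.01 (base 2) × 2^(−3)):

    import struct

    # Reinterpret the 4 bytes of a float32 as an unsigned integer.
    bits = struct.unpack(">I", struct.pack(">f", 0.15625))[0]

    sign     = bits >> 31             # 1 bit
    exponent = (bits >> 23) & 0xFF    # 8-bit stored (excess-127) exponent
    fraction = bits & 0x7FFFFF        # 23-bit fraction; hidden bit not stored

    print(format(bits, "032b"))
    print(sign, exponent - 127, format(fraction, "023b"))
    # -> sign 0, exponent -3, fraction 0100000...0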
Notes
- Single precision has 24 bits of precision, equivalent to about 7.2 decimal digits.
- The largest representable non-infinite number is almost 2 × 2^127 ≅ 3.402823 × 10^38.
- The smallest representable non-zero normalized number is 1 × 2^(−126) ≅ 1.17549 × 10^(−38).
- Denormalized numbers (e.g. 0.01 × 2^(−126)) can be represented.
- There are two zeros, ±0.
- There are two infinities, ±∞.
- A NaN (not a number) is used for results from undefined operations.
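These special values behave the same way in Python doubles, so they can be demonstrated directly (a small sketch):

    import math

    print(0.0 == -0.0)               # True, though the bit patterns differ
    print(math.copysign(1.0, -0.0))  # -1.0: the sign of zero is observable
    print(math.inf, -math.inf)       # the two infinities
    print(math.nan == math.nan)      # False: NaN is unequal even to itself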
The double precision floating point standard requires a 64 bit word:
- The first bit is the sign bit
- The next eleven bits are the exponent bits
- The final 52 bits are the fraction
The range of double numbers is approximately ±2.225 × 10^(−308) to ±1.7977 × 10^308.
Figure 1.12. Double precision memory format
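The double precision limits quoted above can be confirmed with sys.float_info, which reports the parameters of the platform's doubles (a sketch):

    import sys

    print(sys.float_info.max)  # 1.7976931348623157e+308
    print(sys.float_info.min)  # 2.2250738585072014e-308 (smallest normalized)
    print(sys.float_info.dig)  # 15 decimal digits of precision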
1.3. Computer Systems
A computer is an electronic device that performs calculations on data, presenting the results to
humans or other computers in a variety of (hopefully useful) ways. The computer system includes
not only the hardware, but also the software that is necessary to make the computer function.
Computer hardware is the physical part of a computer, including the digital circuitry, as
distinguished from the computer software that executes within the hardware.
Computer software is a general term used to describe a collection of computer programs,
procedures and documentation that perform some task on a computer system.
Computer Organization
General Model of a Computer
A computer, irrespective of its size and make, performs basically five major operations or functions.
1. Input: This is the process of entering data and programs into the computer system. A computer is an electronic machine which, like any other machine, takes in raw data, performs some processing, and gives out processed data. The input unit therefore takes data from us to the computer in an organized manner for processing.
2. Storage: The process of saving data and instructions permanently is known as storage. Data has to be fed into the system before the actual processing starts, because the processing speed of the Central Processing Unit (CPU) is so fast that data has to be provided to the CPU at a matching speed. Therefore the data is first stored in the storage unit for faster access and processing. This storage unit, or primary storage, of the computer system is designed for this functionality. It provides space for storing data and instructions.
The storage unit performs the following major functions:
- All data and instructions are stored here before and after processing.
- Intermediate results of processing are also stored here.
3. Processing: The task of performing operations like arithmetic and logical operations is called processing. The Central Processing Unit (CPU) takes data and instructions from the storage unit and performs all sorts of calculations based on the instructions given and the type of data provided. The results are then sent back to the storage unit.
4. Output: This is the process of producing results from the data in order to obtain useful information. Like the input, the output produced by the computer after processing is kept somewhere inside the computer before being given to you in human-readable form, and remains stored there for further processing.
5. Control: This is the manner in which instructions are executed and the above operations are performed. The control of all operations such as input, processing and output is performed by the control unit. It takes care of the step-by-step processing of all operations inside the computer.
In order to carry out the operations mentioned above, the computer allocates tasks between its various functional units. The computer system is divided into several units for its operation:
- CPU (central processing unit): the place where decisions are made, computations are performed, and input/output requests are delegated
- Memory: stores information being processed by the CPU
- Input devices: allow people to supply information to computers
- Output devices: allow people to receive information from computers
- Buses: a bus is a subsystem that transfers data or power between computer components inside a computer


By Gtime.
