UGC NET Computer Science (Computer System Architecture)
Introduction
In this article, you will learn some important terminology and MCQ topics that are frequently asked in UGC NET Computer Science (Computer System Architecture).
Computer System Architecture
· Computer System
Architecture is the organization of the components which make up a computer
system and the meaning of the operations which guide its function.
· Computer System
Architecture is a set of interconnected building blocks, each of which performs
a particular function in the system.
· Computer System
Architecture is the conceptual design and fundamental operational structure of
a computer system.
· Computer System
Architecture defines what is seen on the machine interface, which is targeted
by programming languages and their compilers.
· Computer System Architecture includes a CPU, a memory unit, and a GPU (Graphics Processing Unit).
In a computer system, data transfer takes
place between processor registers and memory and between processor registers
and the input-output systems.
General register organization
· General register organization is the arrangement of registers in the processor that store and manipulate data for computations.
· The main
components of a register organization include registers, memory, and
instructions.
Register
· A set of
flip-flops forms a register.
· A register is a
small, high-speed storage area within a computer’s processor or CPU.
· A register is the
most elementary data-storing device that is implemented onto the processor chip
itself.
· Registers hold operands or instructions that the CPU is currently processing.
· The processor can
directly access the data stored in registers.
· Registers allow
the processor to quickly access and manipulate the stored information.
· Registers can be controlled directly: a program chooses which values are stored in and retrieved from specific registers.
Register transfer
Register transfer refers to the
availability of hardware logic circuits that can perform a given micro-operation and transfer the result of the operation to the same or another
register.
Register transfer micro-operations
Register transfer micro-operations transfer
binary information from one register to another.
Micro-operations
· Micro-operations are also known as micro-ops.
· Micro-operations are the operations the CPU performs to fetch operand values stored in registers and to execute instructions.
· A Micro-operation is an elementary operation performed on the information stored in one or more registers in the CPU.
· Some examples of Micro-operations are shift, count, clear, and load.
Arithmetic micro-operations
Arithmetic micro-operations perform arithmetic
operations on numeric data stored in registers.
Logical micro-operations
Logical micro-operations perform bit
manipulation operations on data stored in registers.
Shift micro-operations
Shift micro-operations are used for the serial transfer of data and are also used in conjunction with arithmetic, logic, and other data-processing operations.
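The three classes of micro-operations above can be illustrated in a few lines of C. This is a minimal sketch, with register names and values invented for illustration, modelling 16-bit registers as uint16_t values:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint16_t R1 = 0x00F0, R2 = 0x0F0F;  /* two hypothetical 16-bit registers */

        uint16_t sum  = R1 + R2;  /* arithmetic micro-operation: R3 <- R1 + R2 */
        uint16_t inc  = R1 + 1;   /* arithmetic micro-operation: increment     */
        uint16_t andv = R1 & R2;  /* logical micro-operation: bitwise AND      */
        uint16_t xorv = R1 ^ R2;  /* logical micro-operation: bitwise XOR      */
        uint16_t shl  = R1 << 1;  /* shift micro-operation: shift left         */
        uint16_t shr  = R1 >> 1;  /* shift micro-operation: shift right        */

        printf("sum=%04X inc=%04X and=%04X xor=%04X shl=%04X shr=%04X\n",
               sum, inc, andv, xorv, shl, shr);
        return 0;
    }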
Register transfer language
Register transfer language (RTL) is the symbolic notation used to specify the sequence of micro-operations.
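For example (register names illustrative), the RTL statement R2 ← R1 denotes a transfer of the contents of register R1 into register R2, while the conditional statement P: R2 ← R1 performs the same transfer only when the control signal P is 1.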
Memory
· Memory is a data
storage device used to store data, instructions, and computer programs.
· Memory holds instructions
and the data about the currently executing program required by the CPU.
· Memory is typically much larger than the register set and more long-term in nature.
· Memory cannot be controlled as directly as registers; the operating system manages its use.
The unit of transfer in memory
The unit of transfer in memory is the
number of bits read out of or written into memory at a time.
Block of data
· A block is also
called a physical record.
· A block is a sequence of bytes or bits, usually containing some whole number of records, with a maximum length called the block size.
· A data block is
the smallest data unit used by a database.
· Data are often transferred in units larger than a word, and these units are referred to as blocks.
Memory transfer
· Memory transfer
means the transfer of data from memory to the external environment or the transfer
of new data into the memory.
· The two memory transfer operations are Read and Write.
Write operation
· A write operation refers to transferring new information into memory to be stored.
· In a write operation, the data held in the memory buffer register (MBR) is transferred into the memory word M selected by the address register (AR). (A memory word is symbolized by M.)
Read operation
Read operation refers to transferring
information from a memory unit to the user.
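In RTL, the two operations can be written with the registers named above: Read is MBR ← M[AR], which transfers the memory word selected by AR into the memory buffer register, and Write is M[AR] ← MBR, which transfers the contents of the memory buffer register into the selected memory word.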
Main memory
· Main memory holds
instructions and data when a program is executing.
· The objective of main memory is to store data and applications that are currently in use.
· Main memory is
the primary, internal workspace in the computer.
· RAM is the main
memory of a computer.
· The operating
system controls the usage of main memory.
Associative memory
· Associative memory is also known as content-addressable memory (CAM), associative storage, or an associative array.
· Two types of
associative memory in computers are auto-associative and hetero-associative.
· Associative
memory is optimized for performing searches through data.
Auxiliary memory
· Auxiliary memory holds data and programs that are not currently in use, providing long-term storage for future use.
· Auxiliary memory
is used to store inactive programs and to archive data.
· Auxiliary memory
is non-volatile.
Memory hierarchy
· Memory hierarchy refers to arranging the different kinds of storage on a computing device according to speed of access.
· At the top of the hierarchy are the CPU registers, the fastest storage to read from and write to.
· The memory hierarchy, from fastest to slowest, is: registers, cache memory, main memory, magnetic disks, and magnetic tapes.
Cache hierarchy
Cache hierarchy or multi-level caches
refers to a memory architecture that uses a hierarchy of memory stores based on
varying access speeds to cache data.
Bus
· In Computer System Architecture, a bus is a communication system that transfers data between components inside a computer or between computers. In a digital system of registers, a path must be provided to move information between registers.
· A bus is a high-speed internal connection.
· A bus is made up of a collection of common lines, one for each bit of a register, through which binary information is transferred one register at a time.
· Two methods of bus transfer are bus transfer using a multiplexer and bus transfer using three-state bus buffers (the multiplexer method is sketched after this list).
· Buses are used to
send control signals and data between the processors and other components.
· Three types of
buses are address bus, data bus, and control bus.
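As an illustration of the first bus transfer method named above, here is a minimal C sketch of a common bus driven by a multiplexer: a 2-bit select signal chooses which of four hypothetical registers places its contents on the bus, and a destination register then loads from the bus.

    #include <stdio.h>
    #include <stdint.h>

    /* The selected register's contents appear on the common bus. */
    uint16_t bus_mux(const uint16_t reg[4], int select) {
        return reg[select & 3];
    }

    int main(void) {
        uint16_t reg[4] = { 0x1111, 0x2222, 0x3333, 0x4444 }; /* R0..R3 (illustrative) */
        uint16_t bus = bus_mux(reg, 2); /* select = 2 places R2 on the bus             */
        reg[0] = bus;                   /* a load signal clocks the bus into R0: R0 <- R2 */
        printf("bus = %04X, R0 = %04X\n", bus, reg[0]);
        return 0;
    }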
Address bus
Address bus carries memory addresses from
the processor to other components such as primary storage and input/output
devices.
Data bus
The data bus carries the data between the processor
and other components. The data bus is bidirectional.
Control bus
· The control bus carries control signals from the processor to other components. The control bus also carries the clock pulses.
· The control bus
is unidirectional.
Interconnection structure
· Interconnection
structure refers to the collection of paths connecting the various modules.
· Five types of interconnection structures are (a) time-shared common bus, (b) multiport memory, (c) crossbar switch, (d) multistage switching network, and (e) hypercube system.
Time-shared common bus
In any multiprocessor system, the
time-shared common bus interconnection structures provide a common
communication path by connecting all the functional units like I/O, processor,
and memory units.
Basic computer organization
· Computer organization is concerned with the way the hardware components of a computer system operate and the way they are connected together to form the system.
· The basic organization
of a computer system is the processing unit, memory unit, and input-output
devices.
· The five basic components of computer organization are the input unit, the output unit, the memory unit, the control unit, and the arithmetic logic unit (ALU).
Processing unit
The processing unit controls all the
functions of the computer system.
Stored program organization
· The simplest way
to organize a computer is to have one processor register and an instruction
code format with two parts. The first part specifies the operation to be
performed and the second specifies an address.
· Stored programs are commonly held in memory devices such as RAM and ROM.
Computer instruction
· Computer
instruction is an order given to a computer processor by a computer program.
· At the lowest
level, each instruction is a sequence of 0s and 1s that describes a physical
operation the computer is to perform.
Instruction code
An instruction code is a group of bits instructing the computer to perform a specific operation.
Operation Code of an Instruction
· The operation code of an instruction is a group of bits that defines operations such as addition, subtraction, shift, and complement.
· The three different types of instructions are memory reference instructions, register reference instructions, and input-output instructions.
· Each instruction code is 16 bits long and consists of three parts called fields: the mode bit, the opcode, and the address.
Memory reference instruction
Memory reference instruction uses 12 bits
to specify the address and 1 bit to specify the addressing mode.
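A short C sketch of decoding this 16-bit format (the sample instruction word is invented for illustration): bit 15 is the mode bit, bits 14-12 are the opcode, and bits 11-0 are the address.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint16_t instr = 0x940A;              /* hypothetical instruction word       */
        int mode    = (instr >> 15) & 0x1;    /* 0 = direct, 1 = indirect addressing */
        int opcode  = (instr >> 12) & 0x7;    /* 3-bit operation code                */
        int address =  instr        & 0x0FFF; /* 12-bit address field                */
        printf("mode=%d opcode=%d address=0x%03X\n", mode, opcode, address);
        return 0;
    }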
Register Reference Instruction
· Register reference instructions are recognized by the opcode 111 with a 0 in the leftmost bit of the instruction.
· A Register
Reference Instruction specifies an operation on or a test of the AC (Accumulator)
register.
Programming language
· A programming
language refers to a system of notation for writing computer programs.
· Programming
languages are text-based formal languages, but they may also be graphical.
· Programming
languages are used to write all computer programs and computer software.
· Some common
examples of programming languages are Python, Ruby, Java, JavaScript, C, C++,
and C#.
Machine language
· Machine language, also known as machine code or object code, consists of instructions that are executed directly by the CPU.
· Machine language
is the language understood by a computer.
· Machine language
is a low-level programming language made out of binary numbers or bits that can
only be read by machines.
· All programs and
programming languages eventually generate or run programs in machine language.
Assembly language
· Assembly language
is a type of low-level programming language that is intended to communicate
directly with a computer’s hardware.
· An Assembly
language consists mostly of symbolic equivalents of a particular computer’s
machine language.
· An Assembly
language allows a software developer to code using words and expressions that
can be easier to understand and interpret.
· Assembly languages
are designed to be readable by humans.
High-level language
· High-level
language is any programming language that enables the development of a program in a
much more user-friendly programming context.
· High-level
language is generally independent of the computer’s hardware architecture.
· A High-level
language has a significant abstraction from the details of computer operation.
· A High-level
language does not require addressing hardware constraints when developing a
program.
· Every program written in a high-level language must be translated into machine language before it can be executed by the computer.
· Some common
examples of High-level languages are BASIC, C/C++, and Java.
Assembler
· An assembler is a
program that takes basic computer instructions and converts them into a pattern
of bits that the computer’s processor can use to perform its basic operations.
· An assembler typically completes the task in two passes: (a) in the first pass, it assigns addresses and builds a symbol table; (b) in the second pass, it translates the mnemonics into binary code in the form of 0s and 1s (a toy sketch follows below).
· Some common examples of assemblers are GAS (the GNU Assembler), NASM, and MASM.
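The two passes can be sketched in C with a toy, invented instruction set (the mnemonics, opcodes, and program are all hypothetical): pass 1 records the address of every label, and pass 2 translates each line into a 16-bit word of the form opcode << 12 | address.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct Line { const char *label, *mnemonic, *operand; };

    static struct Line prog[] = {
        { NULL, "LDA", "X" },  /* load X into the accumulator     */
        { NULL, "ADD", "Y" },  /* add Y to the accumulator        */
        { NULL, "STA", "X" },  /* store the accumulator back to X */
        { "X",  "DAT", "5" },  /* data word                       */
        { "Y",  "DAT", "7" },  /* data word                       */
    };
    enum { N = sizeof prog / sizeof prog[0] };

    static const char *mnemonics[] = { "LDA", "ADD", "STA" };

    int main(void) {
        const char *sym_name[N]; int sym_addr[N], n_sym = 0;

        /* Pass 1: assign an address (here, the line index) to every label. */
        for (int a = 0; a < N; a++)
            if (prog[a].label) {
                sym_name[n_sym] = prog[a].label;
                sym_addr[n_sym] = a;
                n_sym++;
            }

        /* Pass 2: translate each line into a 16-bit word. */
        for (int a = 0; a < N; a++) {
            int word;
            if (strcmp(prog[a].mnemonic, "DAT") == 0) {
                word = atoi(prog[a].operand);      /* literal data word */
            } else {
                int opcode = 0, addr = 0;
                for (int i = 0; i < 3; i++)        /* look up the opcode  */
                    if (strcmp(prog[a].mnemonic, mnemonics[i]) == 0) opcode = i + 1;
                for (int i = 0; i < n_sym; i++)    /* resolve the operand */
                    if (strcmp(prog[a].operand, sym_name[i]) == 0) addr = sym_addr[i];
                word = (opcode << 12) | addr;
            }
            printf("%02d: %04X\n", a, word);
        }
        return 0;
    }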
Multiprocessor system
· A multiprocessor system is also known as a tightly coupled system.
· A Multiprocessor
system refers to a system with more than one processor or multiple processors.
· A Multiprocessor
system contains a number of CPUs linked together to enable parallel processing
to take place.
· A Multiprocessor
system is used to boost a system’s execution speed.
· Multiprocessors have
the advantage of higher throughput, increased dependability, and economies of
scale.
Main characteristics of a multiprocessor
Parallel computing
Parallel computing involves the
simultaneous application of multiple processors.
Distributed computing
Distributed computing involves the use of a network of processors. Each processor in this network can be considered a computer in its own right and has the capability to solve problems. These processors are heterogeneous, and generally one task is allocated to a single processor.
Supercomputing
· Supercomputing involves
the usage of the fastest machines to resolve big and computationally complex
problems.
· Early supercomputing machines were vector computers, but at present both vector and parallel computing are widely accepted.
Vector computing
Vector computing involves the use of vector processors, wherein operations such as multiplication are divided into many steps and each step is then applied to a stream of operands ("vectors").
Pipelining
· Pipelining is an arrangement of the hardware elements of the CPU such that its overall performance is increased.
· Pipelining is a method wherein a specific task is divided into several subtasks that must be performed in a sequence.
· Pipelining is a technique where multiple instructions are overlapped during execution. Simultaneous execution of more than one instruction takes place in a pipelined processor.
· The pipelining process happens in a continuous, orderly, and overlapped manner.
· Pipelining allows storing, prioritizing, managing, and executing tasks and instructions in an orderly process.
· In the 8086 processor, fetching the next instruction while executing the current instruction is called pipelining.
The five pipeline stages are
1. Instruction fetch
2. Instruction decode
3. Instruction execute
4. Memory access
5. Write back
Instruction fetch
· In the instruction fetch stage, the CPU reads the instruction from the memory address held in the program counter.
Memory access
· In the memory access stage, memory operands referenced by the instruction are read from or written to memory.
· Efficiency = S / Smax = given speedup / maximum speedup
· Throughput = number of instructions / total time to complete the instructions
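As a worked example (all numbers assumed for illustration): with a 4-stage pipeline (so Smax = k = 4), n = 100 instructions, and a 10 ns cycle time, pipelined execution takes (k + n - 1) x 10 ns = 1030 ns, against n x k x 10 ns = 4000 ns without pipelining. The speedup is S = 4000 / 1030 ≈ 3.88, the efficiency is S / Smax ≈ 0.97, and the throughput is 100 instructions / 1030 ns ≈ 97 million instructions per second.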
Arithmetic pipeline
· Arithmetic
pipeline separates a given arithmetic problem into sub-problems that can be
executed in different pipeline segments.
· An Arithmetic
pipeline is used for multiplication, floating-point operations, and other
calculations.
Control unit (CU) design
· A CU is circuitry
within a computer’s processor that directs operations.
· The CU instructs the memory, the arithmetic logic unit, and the input and output devices of the computer on how to respond to the program's instructions.
· A control memory
is a part of the CU. Any computer that involves micro-programmed control
consists of two memories. They are the main memory and the control memory.
Programs are usually stored in the main memory by the users.
· CU is used by
CPUs and GPUs.
Interprocessor Communication (IPC)
· IPC is the
mechanism provided by the OS that allows processes to communicate with each
other.
· IPC allows you to
move data between loosely coupled processors using the multi-processor
interconnect facility (MPIF) and channel-to-channel (CTC) communication links.
· IPC is used for
programs to communicate data to each other and to synchronize their activities.
· Some common
methods of IPC are semaphores, shared memory, and internal message queues.
· As examples of IPC in practice, POSIX systems commonly use the shared memory technique, while Windows XP and Mac systems use the message passing technique.
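A minimal POSIX C example of one IPC mechanism, a pipe, in which a parent process sends a message to its child (the message text is arbitrary; shared memory and message queues follow the same communicate-through-the-OS pattern):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {                 /* child process: reads from the pipe */
            char buf[64];
            close(fd[1]);                  /* close the unused write end         */
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            close(fd[0]);
            return 0;
        }

        const char *msg = "hello via IPC"; /* parent process: writes to the pipe */
        close(fd[0]);                      /* close the unused read end          */
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                        /* wait for the child to finish       */
        return 0;
    }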
Interprocessor arbitration
Interprocessor arbitration refers to the mechanism that resolves competing requests for the system bus so that only one component can access it at a time.
Interprocess synchronization
Interprocess synchronization coordinates the exchange of data between different processes.
Synchronization
· Synchronization
refers to the special case where the data used to communicate between
processors is control information.
· Synchronization
is an essential part of interprocess communication.
Stack
· In digital computers, a stack is a collection of memory locations together with a register that stores the address of the top element.
· Stack is a data
storage structure in which the most recent thing deposited is the most recent
item retrieved.
· Stack is based on
the LIFO concept.
· A stack is a memory unit with an address register. This register holds the address for the stack and is known as the Stack Pointer (SP). There are two types of stacks: (a) the register stack and (b) the memory stack.
· The two
operations of a Stack are the insertion and deletion of items.
· Nothing is physically moved when items are pushed or popped in a computer stack; these operations are simulated by incrementing or decrementing the stack pointer (see the sketch at the end of this section).
· The five basic operations of a stack are (a) push, (b) pop, (c) isEmpty, (d) isFull, and (e) top. Push inserts an item into the stack. Pop deletes an item from the stack. Peek or top displays the contents at the top of the stack. The "top" element of the stack is the element that was last pushed and will be the first to be popped. The "bottom" element of the stack is the element that, when removed, will leave the stack empty.
· Some common
examples of a Stack are a pile of books, a Stack of dinner plates, and a box of
Pringles potato chips.
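A minimal C sketch of a memory stack (sizes and values invented for illustration): the stack pointer SP marks the top element, and push/pop only move SP rather than relocating data.

    #include <stdio.h>

    #define STACK_SIZE 64

    static int stack[STACK_SIZE];
    static int sp = -1;                    /* SP = -1 means the stack is empty */

    int isEmpty(void) { return sp < 0; }
    int isFull(void)  { return sp == STACK_SIZE - 1; }
    void push(int v)  { if (!isFull()) stack[++sp] = v; }     /* SP <- SP + 1; M[SP] <- v */
    int  pop(void)    { return isEmpty() ? 0 : stack[sp--]; } /* v <- M[SP]; SP <- SP - 1 */
    int  top(void)    { return isEmpty() ? 0 : stack[sp]; }

    int main(void) {
        push(10); push(20); push(30);
        printf("top=%d pop=%d pop=%d empty=%d\n", top(), pop(), pop(), isEmpty());
        return 0;
    }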
Instruction formats
· Instruction formats
refer to a sequence of bits in a machine instruction that defines the layout of
the instruction.
· The instruction
format is a pattern of bits that the control unit of the CPU can decode.
· An instruction in a computer comprises groups of bits called fields.
Vector processing
· Vector processing
is the process of using vectors to store a large number of variables for
high-intensity data processing.
· In vector processing, the program size is small, as it requires fewer instructions.
· A Vector
processor acts on several pieces of data with a single instruction.
· A vector processor can improve performance on certain workloads, notably numerical simulation.
· Some common
examples of vector processing are weather forecasting, human genome mapping, and
GIS data.
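The single-instruction-on-many-operands idea can be sketched with GCC/Clang vector extensions (a compiler-specific feature, used here only for illustration): the one expression below adds four integer lanes at once, where a scalar processor would need a loop.

    #include <stdio.h>

    typedef int v4si __attribute__((vector_size(16)));  /* four 32-bit integers */

    int main(void) {
        v4si a = { 1, 2, 3, 4 };
        v4si b = { 10, 20, 30, 40 };
        v4si c = a + b;            /* one vector operation performs four additions */
        for (int i = 0; i < 4; i++)
            printf("%d ", c[i]);
        printf("\n");
        return 0;
    }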