Computer Siksha

Class 11 Computer System Notes

Introduction

Understanding how computers work internally is essential for every student studying computer science. These Class 11 Computer System Notes explain the fundamental concepts of computer organization, including hardware, software, input and output devices, CPU, and different types of memory such as primary, cache, and secondary memory. Students will also learn about operating systems, types of software, Boolean logic, number systems, and encoding schemes like ASCII and Unicode. These Class 11 Computer System Notes are carefully designed to make complex topics easy to understand and are extremely helpful for exam preparation. They are important for all Class 11 students from CBSE, ICSE, other central boards, and various state boards who want a strong foundation in computer science.

Computer Systems and Organisation – Class 11 Complete Guide
Class 11 Computer Science · Chapter 1

From hardware fundamentals to Boolean logic, number systems, and encoding — everything clearly explained for Class 11.

3,000+ words · 7 topics · 10 min read · Class 11 level

Introduction

Computers power everything around us — from smartphones to satellites. However, most people use computers every day without truly understanding how they work on the inside. This chapter changes that. In addition to giving you essential technical knowledge, it builds the solid foundation you need for every advanced topic in Computer Science. Therefore, let us start from the very beginning and understand exactly how a computer system is organised, what software does, and how computers process and store information at the lowest level.

1. Basic Computer Organisation: Hardware

A computer system is an integrated combination of hardware and software that together perform data processing tasks. To understand how it works, you first need to examine its key physical components. Moreover, every computer — regardless of its size or cost — follows the same fundamental structure: Input → Process → Output → Storage. Consequently, understanding this flow is the first step toward mastering computer organisation.

Input Devices: Feeding Data into the Computer

Input devices allow users to send raw data into the computer system. Furthermore, they translate human-readable input into machine-readable electrical signals. Common examples include the keyboard, mouse, scanner, microphone, webcam, touchscreen, and barcode reader. For instance, when you type on a keyboard, each keypress sends a unique electrical signal that the CPU then decodes into a corresponding character. As a result, the computer receives meaningful data that it can process and act upon.

Output Devices: Presenting the Results

Output devices present the processed results back to the user in a human-understandable form. Additionally, they convert digital signals into visual, audio, or physical formats. Common output devices include the monitor, printer, speaker, projector, and plotter. For example, a printer converts digital text into physical ink marks on paper, while a speaker converts electrical audio signals into sound waves. Therefore, without output devices, users would have no way to receive or interpret the results of any computation.

Central Processing Unit (CPU): The Brain of the Computer

The CPU is the most important hardware component in any computer system. It carries out all arithmetic, logic, control, and input/output operations as specified by program instructions. Furthermore, the CPU consists of three main sub-components that work together seamlessly to execute every instruction:

| Component | Full Form | Function |
|---|---|---|
| ALU | Arithmetic Logic Unit | Performs all arithmetic (+, −, ×, ÷) and logical (AND, OR, NOT) operations |
| CU | Control Unit | Directs and coordinates the operations of all other computer components |
| Registers | - | Tiny, ultra-fast storage locations inside the CPU that hold data currently being processed |

How the CPU Executes Instructions

Whenever you open an application or perform a calculation, the CPU follows a structured process called the Fetch-Decode-Execute cycle. First, the Control Unit fetches the instruction from memory. Next, it decodes the instruction to determine what action is required. Finally, the ALU executes the operation and writes the result back to memory. Therefore, every action on your computer ultimately reduces to this cycle repeating billions of times per second.
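The cycle above can be sketched as a toy simulation. This is a minimal, illustrative Python model — real CPUs work on binary machine code, not tuples — but the fetch, decode, and execute phases map directly onto the loop below:

```python
# A toy Fetch-Decode-Execute loop (illustrative only, not real machine code).
# Each "instruction" is a tuple of (operation, operand).
memory = [("LOAD", 7), ("ADD", 5), ("STORE", None)]  # a tiny "program" in memory
accumulator = 0
result = None

pc = 0  # program counter: address of the next instruction
while pc < len(memory):
    instruction = memory[pc]      # FETCH the instruction from memory
    op, operand = instruction     # DECODE it into an operation and an operand
    if op == "LOAD":              # EXECUTE the decoded operation
        accumulator = operand
    elif op == "ADD":
        accumulator += operand
    elif op == "STORE":
        result = accumulator      # write the result back
    pc += 1                       # move on to the next instruction

print(result)  # 12
```

The program counter advancing by one each iteration is exactly why the cycle repeats billions of times per second on real hardware: one pass of the loop corresponds to one instruction.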

2. Memory: Types and Units

Memory stores data and instructions both temporarily and permanently. Without memory, a computer cannot retain any information at all. Accordingly, computer memory is classified into three main types based on speed, capacity, and proximity to the CPU. Understanding these distinctions is essential for grasping how computers manage information efficiently.

Primary Memory: Fast and Directly Accessible

Primary memory is directly accessible by the CPU and stores the data and programs that are currently in use. There are two important types. First, RAM (Random Access Memory) is volatile — it loses all data when you switch off the computer. It holds currently running programs and active files. Moreover, more RAM generally means you can run more programs simultaneously without slowdown. Second, ROM (Read-Only Memory) is non-volatile — it retains data even without power. It stores the firmware (BIOS/UEFI) that boots up the computer. Consequently, the CPU can read from ROM but cannot write to it under normal operation.

Cache Memory: Bridging the Speed Gap

Cache memory sits between the CPU and RAM, acting as a high-speed buffer. It stores copies of frequently accessed data and instructions so the CPU can retrieve them almost instantly, without waiting for the slower RAM. Furthermore, modern CPUs have multiple cache levels — L1 (fastest, smallest), L2, and L3 (slowest among caches, but still far faster than RAM). As a result, cache memory dramatically reduces the time the CPU spends waiting for data — a delay known as latency. Therefore, even a small amount of cache produces a significant improvement in overall system performance.

Secondary Memory: Permanent and Large-Capacity Storage

Secondary memory provides permanent, large-capacity storage that retains data even after the computer is switched off. Examples include Hard Disk Drives (HDD), Solid State Drives (SSD), USB flash drives, optical discs (CD/DVD/Blu-ray), and magnetic tapes. However, secondary memory is significantly slower than primary memory. As a result, the operating system always loads programs into RAM before the CPU begins executing them. Therefore, both types of storage are necessary — secondary memory for permanent storage and primary memory for fast active processing.

Units of Memory: From Bits to Petabytes

Data in computers is measured using specific units that build progressively upon each other. Starting from the smallest unit, the hierarchy grows as follows:

| Unit | Description | Size |
|---|---|---|
| Bit | Binary Digit — smallest unit of data (0 or 1) | 1 bit |
| Byte | 8 bits = 1 Byte — stores one character | 8 bits |
| KB | Kilobyte — 1 KB = 1,024 Bytes | 2¹⁰ bytes |
| MB | Megabyte — 1 MB = 1,024 KB | 2²⁰ bytes |
| GB | Gigabyte — 1 GB = 1,024 MB | 2³⁰ bytes |
| TB | Terabyte — 1 TB = 1,024 GB | 2⁴⁰ bytes |
| PB | Petabyte — 1 PB = 1,024 TB | 2⁵⁰ bytes |
📝 Exam Tip

Remember that memory units use base-2 (powers of 2), not base-10. Therefore, 1 KB = 1,024 bytes, not 1,000 bytes. This important distinction appears frequently in both board and competitive examinations.
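The power-of-2 pattern in the table can be checked in a couple of lines of Python — each unit is exactly 2¹⁰ (= 1,024) times the previous one:

```python
# Memory units grow by powers of 2: 1 KB = 2**10 bytes, 1 MB = 2**20, and so on.
units = ["KB", "MB", "GB", "TB", "PB"]
sizes = {unit: 2 ** (10 * (i + 1)) for i, unit in enumerate(units)}

print(sizes["KB"])  # 1024
print(sizes["MB"])  # 1048576 (= 1024 * 1024)
print(sizes["GB"] // sizes["MB"])  # 1024 -- each step is 1,024x the last
```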

3. Types of Software

Software is a collection of programs, data, and instructions that tell the computer what to do. Without software, hardware is simply metal and silicon with absolutely no purpose. Broadly, software falls into three major categories, each of which serves a distinct role in the overall computer ecosystem. Furthermore, these categories work together in layers — each layer depending on the one below it.

System Software: The Foundation Layer

System software manages and controls computer hardware and provides a stable platform for application software to run. It operates mostly in the background, largely invisible to the average user. Furthermore, system software consists of three important sub-types: the operating system, system utilities, and device drivers. Together, these three components ensure that all hardware works reliably and that the rest of the software ecosystem has a consistent base to build upon.

| Type | Examples | Purpose |
|---|---|---|
| Operating System | Windows 11, macOS, Linux, Android | Manages hardware resources and provides the user interface |
| System Utilities | Disk Defragmenter, Antivirus, Backup tools | Perform maintenance tasks to optimise and protect the system |
| Device Drivers | Printer driver, Graphics driver, Sound driver | Translate OS commands into signals that individual hardware devices understand |

Why Device Drivers Are Essential

Your printer does not automatically communicate with your computer. Instead, the printer driver translates OS-level print commands into the specific signals that your printer understands. Therefore, without device drivers, hardware devices would not function at all. Similarly, a graphics driver enables the OS to communicate with the GPU, allowing smooth rendering of images and videos on screen. Consequently, installing the correct driver is always the very first step when you set up any new hardware component.

Language Translators: Bridging Human Code and Machine Code

Programmers write code in high-level languages like Python, C++, or Java. However, the CPU only understands machine language — pure binary. Consequently, language translators bridge this critical gap between what humans write and what the CPU executes. There are three types of translators, and each one works differently:

| Translator | Works On | How It Translates | Example |
|---|---|---|---|
| Assembler | Assembly language | Converts assembly mnemonics into machine code, one instruction at a time | NASM, MASM |
| Compiler | High-level language | Translates the entire source code into machine code at once, before execution begins | GCC (C), Javac (Java) |
| Interpreter | High-level language | Translates and executes source code line-by-line during runtime | Python, Ruby |

Compiler vs. Interpreter — Key Difference

A compiler produces a standalone executable that runs faster at runtime. In contrast, an interpreter executes code directly line-by-line, making debugging much easier. Therefore, Python programs are simpler to test but generally run slower than compiled C programs.

Application Software: Built Directly for the User

Application software directly serves the user’s specific needs and goals. Furthermore, it runs on top of system software and depends entirely on it to access hardware resources. Examples include word processors (MS Word, LibreOffice), spreadsheets (MS Excel), web browsers (Chrome, Firefox), media players (VLC), games, and accounting tools. In contrast to system software, application software is purpose-built — each application solves one specific category of problem. As a result, the variety of available application software is virtually limitless, with millions of apps catering to every conceivable user need.

4. Operating System (OS)

The Operating System is the most critical piece of system software on any computer. It acts as an intermediary between the user and the computer hardware. Without an OS, every user would need to write machine code for each individual operation — even simply moving the cursor. Therefore, the OS is fundamentally what makes a computer usable by ordinary people without specialised hardware knowledge.

Core Functions of the Operating System

The OS performs several essential functions simultaneously to keep the computer running smoothly and securely. Each function addresses a different aspect of managing hardware and software resources:

| Function | What the OS Does |
|---|---|
| Process Management | Creates, schedules, and terminates processes; allocates CPU time fairly to each running program |
| Memory Management | Allocates and deallocates RAM to programs; prevents one program from accessing another program’s memory |
| File Management | Organises files in a hierarchical directory structure; controls read/write access and permissions |
| Device Management | Controls all input/output devices through drivers; manages device queues to avoid conflicts |
| Security & Access Control | Authenticates users via login, enforces permissions, and protects data from unauthorised access |
| Error Detection | Monitors the system for hardware faults; alerts the user and attempts automatic recovery |

OS User Interface: CLI and GUI

The OS provides users with two main types of interfaces. First, the Command Line Interface (CLI) requires users to type text commands directly. It is fast and powerful; however, it demands memorisation of specific command syntax. Examples include the Windows Command Prompt, the Linux Terminal, and the macOS Terminal. Second, the Graphical User Interface (GUI) uses windows, icons, menus, and pointers — collectively known as WIMP. It is far more user-friendly because users interact visually and intuitively through clicks and gestures. Moreover, modern mobile operating systems like Android and iOS extend this concept further with touch-based interactions.

Why the OS Makes Modern Computing Possible

Without the OS, every programmer would need to rewrite code to manage memory, handle files, and communicate with devices from scratch for every single new application. Furthermore, the OS provides a consistent, standardised platform so that application developers can focus purely on functionality rather than low-level hardware management. Consequently, the OS dramatically reduces complexity and makes software development far more efficient and accessible. As a result, the global software ecosystem — with millions of applications — exists only because operating systems handle all the underlying complexity on behalf of developers and users alike.

5. Boolean Logic and Logic Circuits

George Boole developed Boolean algebra in 1854. Today, it forms the mathematical foundation of all digital circuits and computers. Boolean logic operates on only two values: 1 (True/High) and 0 (False/Low). Furthermore, all computer operations — from simple arithmetic to complex comparisons — ultimately reduce to Boolean operations at the hardware level. Therefore, understanding Boolean logic is essential for understanding how CPUs actually work.

The Six Basic Boolean Gates

Boolean logic is implemented physically through electronic components called logic gates. Each gate performs one specific Boolean operation on its inputs and produces a single output. The six fundamental gates are:

| Gate | Notation | Description |
|---|---|---|
| NOT | Ā or ¬A | Inverts the input — 0 becomes 1, and 1 becomes 0 |
| AND | A · B | Output is 1 only if ALL inputs are 1; otherwise the output is 0 |
| OR | A + B | Output is 1 if AT LEAST ONE input is 1 |
| NAND | ¬(A · B) | Opposite of AND — output is 0 only when all inputs are simultaneously 1 |
| NOR | ¬(A + B) | Opposite of OR — output is 1 only when all inputs are 0 |
| XOR | A ⊕ B | Output is 1 when the inputs are DIFFERENT; 0 when they are the same |

Truth Tables: Verifying Every Possible Combination

A truth table lists all possible input combinations alongside their corresponding output values. Furthermore, truth tables are the most systematic and reliable method for verifying any Boolean expression. Additionally, they help quickly identify whether two different Boolean expressions are logically equivalent, which is extremely useful in circuit simplification.

NOT Gate

| A | NOT A |
|---|---|
| 0 | 1 |
| 1 | 0 |

AND Gate

| A | B | A·B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

OR Gate

| A | B | A+B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |

XOR Gate

| A | B | A⊕B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

NAND Gate

| A | B | NAND |
|---|---|---|
| 0 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

NOR Gate

| A | B | NOR |
|---|---|---|
| 0 | 0 | 1 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 0 |
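You can generate and check these truth tables yourself. The short Python sketch below models each two-input gate with Python's bitwise operators (`&`, `|`, `^`) and enumerates all four input combinations; the unary NOT gate is simply `1 - a`:

```python
from itertools import product

# Each two-input gate as a small function of bits a and b.
# NOT is unary: not_gate = lambda a: 1 - a.
gates = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}

def truth_table(gate):
    """Return rows (A, B, output) for all four input combinations."""
    return [(a, b, gates[gate](a, b)) for a, b in product([0, 1], repeat=2)]

for row in truth_table("XOR"):
    print(row)  # one (A, B, output) row per line
```

Comparing `truth_table("AND")` with `truth_table("NAND")` shows every output inverted — exactly the "opposite of AND" relationship described in the table above.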

De Morgan’s Laws: Simplifying Complex Expressions

De Morgan’s Laws are two fundamental theorems that allow you to simplify and transform Boolean expressions effectively. They are especially useful in logic circuit design because they reduce the number of gates needed. Furthermore, they prove that NAND and NOR are universal gates — meaning you can build any other logic gate using exclusively NAND gates, or alternatively using exclusively NOR gates.

📐 De Morgan’s Laws

Law 1: ¬(A · B) = ¬A + ¬B  →  NOT(A AND B) = (NOT A) OR (NOT B)
Law 2: ¬(A + B) = ¬A · ¬B  →  NOT(A OR B) = (NOT A) AND (NOT B)
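Because Boolean variables take only the values 0 and 1, both laws can be verified exhaustively — there are just four input combinations to check. A minimal Python check, using `1 - x` for NOT, `&` for AND, and `|` for OR:

```python
from itertools import product

# Verify De Morgan's Laws for every possible 0/1 input combination.
for a, b in product([0, 1], repeat=2):
    # Law 1: NOT(A AND B) == (NOT A) OR (NOT B)
    assert (1 - (a & b)) == ((1 - a) | (1 - b))
    # Law 2: NOT(A OR B) == (NOT A) AND (NOT B)
    assert (1 - (a | b)) == ((1 - a) & (1 - b))

print("Both laws hold for every input combination")
```

An exhaustive check over all inputs is exactly what a truth table does — this loop is the programmatic equivalent of writing out the two tables and comparing them column by column.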

Logic Circuits: From Boolean Expressions to Real Hardware

A logic circuit is a physical electronic implementation of a Boolean expression, built using actual logic gates on a chip. These circuits form the fundamental building blocks of all digital systems, including CPUs, memory chips, and communication hardware. Moreover, combinations of basic gates create complex circuits such as adders, multiplexers, flip-flops, and encoders. For instance, a half adder uses one XOR gate (for the sum bit) and one AND gate (for the carry bit) to add two binary digits together. Consequently, millions of such gates inside a modern CPU perform billions of operations every second, enabling all the computation you depend on daily.
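The half adder mentioned above translates directly into two lines of Python, since `^` and `&` mirror the XOR and AND gates:

```python
def half_adder(a, b):
    """Add two bits: sum = A XOR B, carry = A AND B."""
    return a ^ b, a & b  # (sum bit, carry bit)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = 10 in binary
print(half_adder(1, 0))  # (1, 0): 1 + 0 = 1, no carry
```

Chaining half adders (plus OR gates for carry propagation) gives a full adder, and chaining full adders gives the multi-bit adder circuits inside every ALU.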

6. Number Systems and Conversions

Computers process all information in binary (base-2). However, humans primarily use decimal (base-10) for everyday counting and communication. Therefore, computer scientists use four main number systems to represent and work with data efficiently at different levels of abstraction. Understanding each system — and how to convert between them — is one of the most important practical skills in Class 11 Computer Science.

Binary (Base 2)

Uses digits 0 and 1 only. This is the native language of all digital computers. Each binary digit is called a bit. For example, 1010₂ = 10 in decimal.

Octal (Base 8)

Uses digits 0 to 7. It groups binary digits into sets of 3, making large binary numbers easier to read and write. For example, 12₈ = 1010₂.

Decimal (Base 10)

Uses digits 0 to 9. This is the everyday number system that humans use naturally. However, computers do not use decimal internally for processing.

Hexadecimal (Base 16)

Uses digits 0–9 and A–F. It groups binary digits into sets of 4 and is widely used in memory addressing and colour codes. For example, FF₁₆ = 255₁₀.

Conversion Methods: A Complete Reference Table

Converting between number systems is a critical exam skill that also appears frequently in programming work. Accordingly, the table below summarises the method and provides a concrete example for each major conversion path you need to know:

| From | To | Method | Example |
|---|---|---|---|
| Decimal | Binary | Repeatedly divide by 2; read remainders from bottom to top | 13₁₀ → 1101₂ |
| Binary | Decimal | Multiply each bit by its place value (power of 2) and add all results | 1101₂ → 8+4+0+1 = 13₁₀ |
| Decimal | Octal | Repeatedly divide by 8; read remainders from bottom to top | 100₁₀ → 144₈ |
| Binary | Octal | Group binary digits into sets of 3 from right; convert each group | 110 100₂ → 64₈ |
| Decimal | Hexadecimal | Repeatedly divide by 16; map remainders ≥10 to letters A–F | 255₁₀ → FF₁₆ |
| Binary | Hexadecimal | Group binary digits into sets of 4 from right; convert each group | 1111 1111₂ → FF₁₆ |
| Hexadecimal | Binary | Replace each hex digit with its 4-bit binary equivalent | A3₁₆ → 1010 0011₂ |
| Octal | Binary | Replace each octal digit with its 3-bit binary equivalent | 37₈ → 011 111₂ |

Worked Example: Decimal to Binary Step-by-Step

🔢 Convert 45₁₀ to Binary

45÷2 = 22 R1 → 22÷2 = 11 R0 → 11÷2 = 5 R1 → 5÷2 = 2 R1 → 2÷2 = 1 R0 → 1÷2 = 0 R1. Subsequently, read the remainders from bottom to top: 101101₂. Verify: 32+8+4+1 = 45 ✓
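The repeated-division method above is a few lines of Python. The function records each remainder and then reads them bottom to top, and Python's built-in `int(s, 2)` verifies the answer in the other direction:

```python
def decimal_to_binary(n):
    """Convert a non-negative decimal integer to a binary string by repeated division by 2."""
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # record the remainder (0 or 1)
        n //= 2                        # integer-divide by 2 and repeat
    return "".join(reversed(remainders)) or "0"  # read remainders bottom to top

print(decimal_to_binary(45))  # 101101
print(int("101101", 2))       # 45 -- the built-in converts back for a quick check
```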

Converting Between Octal and Hexadecimal

In addition to the direct conversions above, you also need to know how to convert between octal and hexadecimal. The most reliable approach is to use binary as an intermediate step. First, convert the octal number to binary by replacing each octal digit with its 3-bit binary equivalent. Subsequently, regroup those binary digits into sets of 4 (starting from the right) and convert each group to its corresponding hexadecimal digit. This two-step method eliminates any chance of error and makes the conversion completely straightforward. Therefore, binary serves as the universal bridge connecting all other number systems to each other.
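The two-step method can be sketched in Python. The hypothetical helper `octal_to_hex` below expands each octal digit to 3 bits, regroups the bit string into sets of 4, and maps each group to a hex digit:

```python
def octal_to_hex(octal_str):
    """Convert an octal string to hexadecimal using binary as the intermediate step."""
    # Step 1: replace each octal digit with its 3-bit binary equivalent.
    bits = "".join(format(int(d, 8), "03b") for d in octal_str)
    # Step 2: pad to a multiple of 4 bits, then convert each 4-bit group to a hex digit.
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    hex_digits = "".join(format(int(bits[i:i + 4], 2), "X") for i in range(0, len(bits), 4))
    return hex_digits.lstrip("0") or "0"  # drop padding zeros

print(octal_to_hex("37"))   # 1F  (37 octal = 31 decimal = 1F hex)
print(octal_to_hex("144"))  # 64  (144 octal = 100 decimal)
```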

🧠 Quick Knowledge Check

1. What is the decimal equivalent of the binary number 1011?
(a) 9  (b) 11  (c) 13  (d) 10

2. Which Boolean gate gives output 1 only when all inputs are 0?
(a) NAND  (b) NOR  (c) XOR  (d) AND

3. How many bytes does 1 Kilobyte (KB) contain?
(a) 1,000  (b) 1,024  (c) 512  (d) 2,048

Answers: 1. (b) 11; 2. (b) NOR; 3. (b) 1,024

7. Encoding Schemes: ASCII, ISCII, and Unicode

Computers store everything as binary numbers — including all text. Therefore, an encoding scheme is a system that assigns a unique numeric code to each character, symbol, or letter so that computers can store, process, and transmit text accurately. Moreover, without a commonly agreed encoding standard, text created on one computer would appear as garbled, unreadable symbols on any other system. As a result, encoding schemes are absolutely essential for digital communication and data exchange.

Why Encoding Schemes Exist

Consider the letter “A”. When you press the key, the computer needs a clear rule to decide which binary number represents it. Encoding schemes provide exactly that rule. Furthermore, since different languages and writing systems have vastly different character sets, multiple encoding schemes exist to handle the full diversity of human written language. Consequently, choosing the right encoding for the right context is an important decision in software and web development.

ASCII

American Standard Code for Information Interchange. Uses 7 bits to represent 128 characters — English letters, digits (0–9), punctuation, and control characters. Extended ASCII uses 8 bits for 256 characters. Additionally, ASCII assigns decimal 65 to ‘A’, 66 to ‘B’, and so on sequentially.
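Python exposes these code points directly: `ord()` gives a character's numeric code and `chr()` goes the other way, which makes the ASCII assignments easy to confirm:

```python
# ord() returns a character's code point; chr() converts a code point back to a character.
print(ord("A"))  # 65
print(ord("B"))  # 66 -- codes are assigned sequentially
print(chr(97))   # a

# 'A' = 65 fits comfortably in ASCII's 7 bits:
print(format(ord("A"), "07b"))  # 1000001
```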

ISCII

Indian Script Code for Information Interchange. Developed by the Bureau of Indian Standards (BIS) in 1988. Designed specifically for Indian scripts — Devanagari, Bengali, Tamil, Telugu, and more. Moreover, it uses 8 bits and is backward-compatible with ASCII for the first 128 code positions.

Unicode

A universal standard that aims to represent every character in every language on Earth. Currently, it defines over 1,40,000 characters. Furthermore, it comes in multiple formats — including UTF-8 and UTF-32 — selected based on storage efficiency and system requirements.

UTF-8 vs. UTF-32: Which Encoding to Choose?

| Feature | UTF-8 | UTF-32 |
|---|---|---|
| Bits per character | Variable: 8 to 32 bits (1–4 bytes) | Fixed: 32 bits (4 bytes) always |
| ASCII compatibility | Yes — ASCII characters use only 1 byte | No direct compatibility; wastes space for ASCII characters |
| Storage efficiency | Highly efficient for English and Latin text | Less efficient — every character always uses 4 bytes |
| Use case | Web pages, emails, APIs, most modern software | Internal processing where fixed-width simplicity is required |
| Example | ‘A’ = 1 byte (0x41) | ‘A’ = 4 bytes (0x00000041) |
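The byte counts in the table can be checked with Python's `str.encode()`. One detail to note: the plain `"utf-32"` codec prepends a 4-byte byte-order mark, so `"utf-32-le"` is used here to measure the character alone:

```python
# Compare how many bytes each encoding needs per character.
print(len("A".encode("utf-8")))      # 1 -- ASCII characters stay 1 byte in UTF-8
print(len("A".encode("utf-32-le")))  # 4 -- UTF-32 always uses 4 bytes
print(len("न".encode("utf-8")))      # 3 -- a Devanagari character needs 3 bytes in UTF-8
```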

Real-World Impact of Unicode and UTF-8

UTF-8 has become the dominant encoding on the internet because it is fully backward-compatible with ASCII and highly efficient for English text, while still supporting all Unicode characters without any limitation. In contrast, UTF-32 offers simplicity in text processing since every character has the same fixed size, but it wastes significantly more storage space in the process. Therefore, the choice between the two depends entirely on the specific use case and performance requirements. Furthermore, every emoji you send — 😊, 🎉, 🚀 — has its own unique Unicode code point. As a result, emojis are simply Unicode characters with special visual representations assigned by the Unicode Consortium. Additionally, over 98% of all web pages worldwide now use UTF-8 encoding, making it the true universal language of digital text.

🌐 Real-World Stat

Over 98% of all web pages today use UTF-8 encoding. Additionally, Unicode currently defines over 1,40,000 unique characters spanning 159 writing scripts from languages across the entire world.


Conclusion

In this chapter, we covered the complete foundation of Computer Systems and Organisation. We started with hardware — input devices, output devices, and the CPU with its Fetch-Decode-Execute cycle. We then explored memory types and units, from bits to petabytes. Furthermore, we distinguished between system software, language translators, and application software, understanding how each layer serves a different purpose. Subsequently, we studied the OS and its six essential functions, including process management, memory management, and security.

We also decoded Boolean logic, built complete truth tables for all six gates, and applied De Morgan’s Laws to understand universal gates. Moreover, we mastered all four number systems and their conversion methods using binary as the universal bridge. Finally, we understood why encoding schemes like ASCII, ISCII, and Unicode exist, and how UTF-8 and UTF-32 differ in practical use. Together, these concepts form the bedrock of all Computer Science. Therefore, mastering this chapter gives you the clarity and confidence to tackle every advanced topic that follows.

Frequently Asked Questions

Hardware and Memory

What is the difference between RAM and ROM?
RAM (Random Access Memory) is volatile — it loses all data when you switch off the computer. It stores currently running programs and active data. In contrast, ROM (Read-Only Memory) is non-volatile — it permanently stores data, mainly the computer’s firmware (BIOS). Furthermore, the CPU can read from ROM but cannot write to it under normal operation. Therefore, think of RAM as a whiteboard (erasable) and ROM as a printed book (permanent and unchanged).
What is the role of cache memory?
Cache memory sits between the CPU and RAM and acts as a high-speed buffer. It stores copies of frequently used data and instructions so the CPU can access them almost instantly, without waiting for the slower main RAM. Furthermore, modern CPUs have three cache levels — L1 (fastest, ~32 KB), L2 (~256 KB), and L3 (several MB). As a result, cache memory dramatically reduces the CPU’s idle waiting time and significantly increases overall system performance.
What does the Control Unit (CU) do inside the CPU?
The Control Unit directs and coordinates all operations within the computer. It fetches instructions from memory, decodes them to determine exactly what action is required, and subsequently signals the appropriate components to execute each instruction. Furthermore, the CU manages the continuous flow of data between the CPU, memory, and all input/output devices. In other words, it is the manager of the CPU, while the ALU is the worker that actually performs all the arithmetic and logical computations on the data.

Software and Operating System

What is the difference between a compiler and an interpreter?
A compiler translates the entire source code into machine code before execution begins. Consequently, the compiled program runs very fast at runtime. However, any syntax error stops compilation entirely. An interpreter, in contrast, translates and executes code line-by-line during runtime itself. Therefore, it is easier to debug but generally runs slower. Python uses an interpreter, while C uses a compiler. Moreover, some languages like Java use a hybrid approach — compiling to intermediate bytecode first, then interpreting it via the JVM at runtime.
Why do computers use binary instead of decimal?
Computers use binary because electronic circuits naturally operate with two states — ON (1) and OFF (0) — corresponding to high voltage and low voltage. Therefore, binary maps perfectly to the physical reality of transistors and logic gates. In contrast, using decimal would require ten distinct voltage levels, which is far more complex, error-prone, and expensive to implement reliably in hardware. As a result, binary became the universal language of all digital computers.

Boolean Logic and Encoding

What makes NAND and NOR “universal gates”?
NAND and NOR are called universal gates because you can construct any other logic gate — AND, OR, NOT, XOR — using only NAND gates, or alternatively using only NOR gates. This is extremely valuable in circuit manufacturing because it means a factory only needs to produce one type of gate to build any digital circuit. Consequently, chip designers often standardise on NAND gates for simplicity and cost efficiency. Furthermore, this property is directly proven using De Morgan’s Laws.
How is UTF-8 different from ASCII?
ASCII uses only 7 bits and represents 128 characters — primarily English letters, digits, and punctuation. UTF-8, on the other hand, uses a variable-width encoding of 1 to 4 bytes per character. Importantly, UTF-8 is fully backward-compatible with ASCII — the first 128 UTF-8 code points are identical to ASCII. However, UTF-8 additionally supports over 1,40,000 characters from all world languages and scripts. Therefore, it has become the dominant universal encoding standard for modern computing and the web.
Why was ISCII developed when ASCII already existed?
ASCII was designed exclusively for the English language and consequently could not represent Indian scripts like Devanagari, Bengali, or Tamil at all. Therefore, the Bureau of Indian Standards (BIS) developed ISCII in 1988 specifically to support Indian languages using an 8-bit encoding. Furthermore, ISCII is backward-compatible with ASCII for the first 128 code points. Today, Unicode has largely replaced ISCII for modern applications; however, ISCII remains historically significant and is still found in legacy Indian government systems.