[IT Fundamentals Bilingual Notes] Notational Systems and Data Types | CompTIA ITF+

Today I started self-studying Jason Dion's CompTIA ITF+ course on Udemy and set myself an exam date of November 25. I had already bought a discount voucher for the course before returning home, and I don't want it to go to waste, so I am treating this as a great opportunity to consolidate my IT fundamentals. The basic notes that follow will mostly be in English, since the exam itself is entirely in English.

English Notes

1. Input/Output/Processing/Storage (I/O) Cycle
  • Input
    • Function: This is the stage where data is entered into the computer system. It serves as the starting point for all computer operations as it provides the raw data that the computer will process.
    • Examples: Input devices like keyboards, mice, scanners, and digital cameras are used to enter data. For instance, when you type on a keyboard, the characters you type are the input data for the computer.
  • Processing
    • Function: The CPU (Central Processing Unit) is responsible for performing calculations, comparisons, and other operations on the input data. This transforms the raw data into meaningful information.
    • Examples: If you enter two numbers using the keyboard and then use a calculator application, the CPU processes the addition or subtraction operation on those numbers to give you the result.
  • Output
    • Function: Displays or transmits the results of the processing to the user or another system. This can be in a visual form (using a monitor or printer) or an audible form (using speakers).
    • Examples: When you see the result of a calculation on your computer screen or hear a sound when you play a music file, that is the output.
  • Storage
    • Function: Data is saved to storage devices for later use. This allows the computer to retain and retrieve the processed information as needed.
    • Examples: Hard drives, SSDs (Solid State Drives), and cloud storage are common storage devices. When you save a document on your computer's hard drive, it is stored for future access. (A minimal Python sketch of the full cycle follows this list.)
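The four stages map naturally onto even the smallest program. Below is a minimal Python sketch of the cycle, assuming a simple command-line script; the file name result.txt, the prompts, and the variable names are illustrative choices, not part of the course material.

```python
# Minimal illustration of the input -> processing -> output -> storage cycle.
# The file name "result.txt" is an illustrative choice.

# Input: raw data entered by the user via the keyboard
a = int(input("First number: "))
b = int(input("Second number: "))

# Processing: the CPU performs the calculation
total = a + b

# Output: the result is displayed on the screen
print(f"{a} + {b} = {total}")

# Storage: the result is written to disk for later use
with open("result.txt", "w") as f:
    f.write(f"{a} + {b} = {total}\n")
```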
2. Notational Systems
  • Decimal Notation (Base-10)
    • Structure: Utilizes a system of ten unique symbols (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) to represent values.
    • Importance: It is the essential numerical system for everyday mathematical operations. It is intuitive and widely used in various aspects of our lives, from basic arithmetic to more complex calculations in different fields.
    • Usage in Programming and Databases: Used in many programming languages and databases for storing and manipulating numerical data. For example, when you define a variable to hold a whole number in a programming language like Python, you can use decimal notation.
  • Binary Notation (Base-2)
    • Structure: Comprises only two different integer values, 0 and 1. Information within a computer is represented and stored as a sequence of binary digits (bits). Each bit represents a power of two, allowing for a compact and precise representation of numerical values.
    • Advantages: Its immediate compatibility with digital electrical systems is a significant advantage. Digital computers use binary notation at the hardware level because it corresponds to the on state (1) and off state (0) of electronic switches. This makes it fundamental for the operation of computer hardware.
    • Applications in Computer Science: Used in various fields such as algorithm design, data structures, and computer networks. For example, in networking, binary is used to represent IP addresses in some cases.
  • Hexadecimal Notation (Base-16)
    • Structure: Employs 16 distinct symbols (0-9 for the first ten values and A-F for the remaining six). Each digit's position in a hexadecimal number represents a power of 16.
    • Advantages: Its strength lies in its compactness and its clean mapping to binary notation. It is commonly used in computer programming, memory addressing, and debugging. For example, hexadecimal is a far more convenient way to write and read memory addresses than raw binary. (A short Python sketch of all three notations follows this list.)
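To make the three systems concrete, here is a short Python sketch. It assumes nothing beyond the language's built-in bin(), hex(), and int() functions; the sample values are arbitrary.

```python
# The same value written in the three notational systems.
value = 0b1010             # binary literal for decimal 10
print(value)               # 10     (decimal, base-10)
print(bin(value))          # 0b1010 (binary, base-2)
print(hex(value))          # 0xa    (hexadecimal, base-16)

# Converting strings in a given base back into integers
print(int("FF", 16))       # 255
print(int("11111111", 2))  # 255 -- one byte with all bits set
```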
3. Units of Measure
  • Data Storage Units

    • bit (Binary Digit): The fundamental unit of data in computing, holding a value of 0 or 1. It is the building block of all data representation; in practice, transfer rates are quoted in bits while storage capacities are quoted in bytes.
    • byte: A group of 8 bits and the basic unit for storing data. A single text character typically takes 1 byte.
    • Nibble: Equivalent to 4 bits, used in hexadecimal notation and low-level data manipulation; one nibble corresponds to a single hexadecimal digit.
    • Larger Units (KB, MB, GB, TB, PB): As data volumes grow, we use these larger units. In decimal (SI) usage, each unit is 1,000 times larger than the previous one. Digital storage, however, is often measured in powers of 2, which is what the binary units KiB, MiB, and GiB denote: 1 MB = 1,000 KB, while 1 MiB = 1,024 KiB.
  • Data Transfer Units

    • kilobits per second (Kbps): Represents 1,000 bits per second. This unit is used to measure the speed at which data is transferred between components in a computer or over a network. For example, a slow internet connection might have a speed of a few hundred Kbps.
    • megabits per second (Mbps): Equal to 1,000 kilobits per second. This is the more common unit for faster transfer rates, such as broadband internet connections, which today typically run at tens or hundreds of Mbps. (A short Python sketch of unit conversions follows this list.)
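The distinction between decimal and binary storage units, and between bits and bytes, is easy to check with a little arithmetic. The Python sketch below uses illustrative numbers (a 50 MB file on a 100 Mbps link), chosen for the example rather than taken from the course.

```python
# Decimal (SI) units versus binary (IEC) units, in bytes
KB, MB = 10**3, 10**6        # 1 KB = 1,000 bytes, 1 MB = 1,000,000 bytes
KiB, MiB = 2**10, 2**20      # 1 KiB = 1,024 bytes, 1 MiB = 1,048,576 bytes
print(MB, MiB)               # 1000000 1048576

# Storage is quoted in bytes (uppercase B), transfer rates in bits (lowercase b).
# Rough download time for a 50 MB file over a 100 Mbps link:
file_bits = 50 * MB * 8      # file size converted from bytes to bits
link_bps = 100 * 10**6       # 100 megabits per second
print(file_bits / link_bps, "seconds")  # 4.0 seconds
```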
4. Data Types
  • Integers
    • Definition: Represent whole numbers, both positive and negative, without any decimal component.
    • Usage: Used in many programming situations where only whole numbers are required. For example, when counting the number of items in a list or representing an age in a database.
  • Floating-point Numbers
    • Definition: Used to represent real numbers (decimal or fractions). They are used when more precision is needed compared to integers.
    • Usage: In scientific calculations, financial applications, or any situation where decimal values are necessary. For example, when calculating the interest rate on a loan or representing the coordinates of a point in a graphical application.
  • Boolean Values
    • Definition: Represent the logical values of TRUE and FALSE. They are used in programming to control the flow of a program and consume one bit of storage.
    • Usage: In conditional statements (if-then-else) to determine which part of the code should be executed based on a certain condition. For example, if a user is logged in (TRUE), show a certain menu; otherwise (FALSE), show a different menu.
  • Characters
    • Definition: Used to represent individual letters, digits, punctuation marks, or any other symbol that can be represented in text. In most programming languages, characters are defined using single quotes, like 'a' or '1'. They consume one byte of storage and cannot be used for math operations.
    • Usage: In text processing applications, such as word processors or text editors, to handle individual characters in a document.
  • Strings
    • Definition: Sequences of characters used to represent and manipulate text. In most programming languages, strings are defined using double quotes, like "Hello, World!". Strings cannot be used for mathematical calculations.
    • Usage: In applications that deal with text manipulation, such as searching for a word in a document, concatenating text, or displaying messages to the user. (A short Python sketch of all five data types follows this list.)
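The five data types can be illustrated in Python. One caveat: Python has no dedicated character type, so the "character" below is simply a string of length one; the variable names are illustrative.

```python
# The five basic data types from the notes, shown in Python.
count = 42                  # integer: whole number, no decimal part
interest_rate = 3.75        # floating-point: real number with a fractional part
logged_in = True            # Boolean: TRUE or FALSE
grade = 'A'                 # character: Python has no separate char type,
                            # so a single character is a 1-length string
greeting = "Hello, World!"  # string: a sequence of characters

# Strings are not numbers: "1" + "2" concatenates, it does not add.
print(1 + 2)      # 3
print("1" + "2")  # 12
```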
5. Character Encoding
  • ASCII (American Standard Code for Information Interchange)
    • Structure: A 7-bit character encoding system that can represent 128 characters, each mapped to a specific binary value; an extended 8-bit form represents 256 characters.
    • Limitations: It cannot accommodate characters from non-English languages or the many special symbols that fall outside its small character set.
  • Unicode
    • Structure: A much more extensive encoding system that can represent over a million unique characters. It encompasses a collection of visual reference code charts and uses different encoding forms such as UTF-8, UTF-16, and UTF-32.
    • Advantages: It includes virtually every character from every writing system in the world, as well as many symbols, emojis, and special characters, making it far more comprehensive and inclusive than ASCII. (A short encoding sketch in Python follows this list.)
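A quick way to see the difference between the two encodings is to encode a few characters and inspect the resulting bytes. The Python sketch below uses the built-in str.encode() method; the sample characters are arbitrary.

```python
# Encoding text: ASCII covers plain English; UTF-8 covers virtually everything.
print("A".encode("ascii"))      # b'A' -- one byte, value 65
print(ord("A"), bin(ord("A")))  # 65 0b1000001 (fits in 7 bits)

print("é".encode("utf-8"))      # b'\xc3\xa9' -- two bytes in UTF-8
print("😀".encode("utf-8"))     # four bytes in UTF-8

# Characters outside ASCII's 128-character range cannot be ASCII-encoded:
try:
    "é".encode("ascii")
except UnicodeEncodeError as e:
    print("not representable in ASCII:", e)
```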

Summary | IT Fundamentals+ (FC0-U61) Study Notes

1. Introduction
  • IT Fundamentals+ offers an introduction to various basic IT knowledge and skills.
  • It's a broad certification covering hardware, software, networking, data, software development, databases, and cybersecurity.
  • A pre-career certification to help determine aptitude and interest in IT and cybersecurity.
  • Six domains: 17% IT Concepts and Terminology, 22% Infrastructure, 18% Applications and Software, 12% Software Development, 11% Database Fundamentals, 20% Security.
  • Up to 75 questions in 60 minutes; a score of 650 out of 900 is needed to pass.
2. Exam Details
  • No trick questions, watch for distractors.
  • Pay attention to bold, italics, or uppercase words.
  • Answer based on course knowledge.
3. I/O Cycle and Notational Systems
  • I/O Cycle
    • Input: data enters computer.
    • Processing: CPU transforms data.
    • Output: shows results.
    • Storage: saves data for later.
  • Notational Systems
    • Decimal (Base-10): uses 0-9.
    • Binary (Base-2): uses 0 and 1.
    • Hexadecimal (Base-16): uses 0-9 and A-F.
4. Units of Measure
  • bit (Binary Digit).
  • byte (8 bits).
  • Nibble (4 bits).
  • Larger units: KB, MB, GB, TB, PB (approx. 1000 times larger each).
  • In digital storage, also use KiB, MiB, GiB (based on powers of 2).
  • Data transfer rates: Kbps, Mbps, Gbps, Tbps.
5. Data Types
  • Integers: whole numbers.
  • Floating-point Numbers: real numbers.
  • Boolean Values: TRUE or FALSE.
  • Characters: single letters, etc.
  • Strings: sequences of characters.
6. Character Encoding
  • ASCII: 128 characters for English.
  • Unicode: over a million characters, includes UTF-8, UTF-16, UTF-32.
7. Data and Information
  • Data: raw, unprocessed items such as customer details.
  • Information: processed data with value.
  • Protect data and information with encryption, backups, access control.
8. Intellectual Property
  • Creations of the mind, protected by copyrights, trademarks, patents.
  • Copyrights: for literary and artistic works.
  • Trademarks: distinguish goods or services.
  • Patents: for inventions.
9. Digital Security Investments
  • Security Controls: Administrative, Physical, Technical.
  • Examples: security policies, locks, firewalls.
  • Backups and access controls are crucial.
10. Data Analytics
  • Raw data is processed into information.
  • Involves statistical analysis and techniques.
  • Helps in making data-driven decisions.

(Chinese Version)
1. Input/Output/Processing/Storage (I/O) Cycle
  • Input
    • The stage where data enters the computer system; it supplies the raw data that the computer will process.
  • Processing
    • The CPU performs calculations, comparisons, and other operations on the raw data, turning it into meaningful information.
  • Output
    • The results of processing are displayed or transmitted to the user or another system, either visually (via a monitor or printer) or audibly (via speakers).
  • Storage
    • Data is saved to storage devices (such as hard drives, SSDs, or cloud storage) for later use, so that results can be retained and retrieved as needed.
2. Notational Systems
  • Decimal Notation
    • Overview: A base-10 system that uses ten unique symbols (0-9) to represent values.
    • Importance: It is the foundation of everyday arithmetic and is used in programming languages and databases to store and manipulate numeric data; it is simple and intuitive.
  • Binary Notation
    • Overview: A number system with only two integer values, 0 and 1. Information inside a computer is represented and stored as sequences of binary digits (bits), and each bit represents a power of 2, giving a compact and precise representation of numeric values.
    • Advantages: It maps directly onto digital electronics; computers use binary at the hardware level because it corresponds to the on (1) and off (0) states of electronic switches. It appears throughout computer science, including algorithm design, data structures, and networking, and understanding binary is essential for mastering more complex notational systems such as hexadecimal.
  • Hexadecimal Notation
    • Overview: A base-16 system that uses 16 distinct symbols (0-9 and A-F). It is commonly used in programming, memory addressing, and debugging.
    • Advantages: It is compact and maps cleanly onto binary. For example, the binary number 1010 equals decimal 10 and hexadecimal A. Understanding hexadecimal is essential for working with many aspects of digital technology.
3. Units of Measure
  • Data Storage Units
    • bit: A binary digit, the smallest unit of data in a computer, holding a value of 0 or 1.
    • byte: A group of 8 bits; the basic unit used to store data such as a single text character.
    • Larger units: As data volumes grow, we also use kilobytes (KB), megabytes (MB), gigabytes (GB), terabytes (TB), and petabytes (PB), each roughly 1,000 times larger than the previous. Because digital storage is actually based on powers of 2, there are also kibibytes (KiB), mebibytes (MiB), and gibibytes (GiB); for example, 1 KiB = 1,024 bytes.
  • Data Transfer Units
    • When data moves between computer components or between computers over a network, we measure its transfer rate: for example, kilobits per second (Kbps) = 1,000 bits per second, and megabits per second (Mbps) = 1,000 Kbps. Note that storage is measured in bytes (uppercase B) while transfer rates are measured in bits (lowercase b).
4. Data Types
  • Definition: A data type defines what kind of data can be stored and manipulated in a program.
  • The five common data types
    • Integers: Positive and negative whole numbers with no fractional part.
    • Floating-point Numbers: Used to represent real numbers (decimals or fractions) when more precision is needed.
    • Boolean Values: The logical values TRUE and FALSE, used to control program flow; they take one bit of storage.
    • Characters: A single letter, digit, punctuation mark, or any other symbol that can appear in text; defined with single quotes in most languages, such as 'a' or '1'; they take one byte of storage and cannot be used in mathematical operations.
    • Strings: Sequences of characters used to represent and manipulate text; defined with double quotes in most languages, such as "Hello, World!"; they cannot be used in mathematical calculations.
5. Character Encoding
  • ASCII (American Standard Code for Information Interchange)
    • A character encoding system that uses 7 (or, in its extended form, 8) binary bits to represent 128 characters. It is designed primarily for English and cannot accommodate characters from other languages or many special symbols.
  • Unicode
    • A far more extensive encoding system that can represent over a million unique characters, covering virtually every writing system in the world plus many symbols, emojis, and special characters. It includes several encoding forms such as UTF-8, UTF-16, and UTF-32, and extends ASCII to meet the encoding needs of all kinds of characters and symbols.