
Neon-komputadór

Computer Users Manual, Ministry of Foreign Affairs and Cooperation, Democratic Republic of East Timor
2003


Languages

English
Portuguese

Index

Introduction
Chapter I: Hardware and Software
Chapter II: Networks and Communications
Chapter III: Operating Systems
Chapter IV: Applications
Chapter V: Basic Coding and Programming

Chapter VI: Basic Systems Administration


Introduction to Systems Administration
Resource Management
User, Account and Computer Management
Network Management
Security

Appendices: Ministry Policy

Ministry Homepage

Resource Management

The first decision a systems administrator must make for any computer system concerns hardware choices. This is more complex than it sounds, as not all computer hardware systems - or even, for that matter, personal computer hardware systems - are the same. Even systems with similar performance according to standard specifications vary in price, adaptability, upgradeability, operating conditions and design. Hardware resource management therefore requires significant planning, balancing all these considerations with relative weighting according to the circumstances. Within a network the matter is further complicated by having to take into consideration the network environment and the other computers on the network.

The starting point of making such a decision is determining what the computer is going to be used for and where it is likely to be used. A computer that is going to be used for training people on hardware repairs is not the same as a computer that is going to be used as a file server for a network - even though they may both be desktop computers. In the former case old second-hand computers, usually available for little if any cost, are strongly recommended. In the latter case, the emphasis should be on hard disk size, redundancy (backup devices) and reliability, network connectivity, power supply and the stability of the operating system. In both these cases the graphics capability of the monitor is not important. For a machine that is going to be used for publishing it is very important - as is the printer that such a machine has access to! A system that is going to be used primarily for word processing should be ergonomic in design, and so on.

The first chapter of this manual discussed many of the issues confronted by a system administrator in making hardware decisions. In the environment of the Ministry of Foreign Affairs and Cooperation, and under the environmental conditions of East Timor, particular emphasis is placed on hardware reliability (especially power supplies and hard disk drives) and the capacity of the system to be upgraded. Unfortunately we do not always receive what would be best for us. The new Hyundai computers are in tower style, whereas the desktop style would have been preferable. The laptops are without a floppy drive, which is still an effective way of distributing computer information in this nation. Reliable centralized printers designed for heavy use would have been strongly preferable; instead the Ministry is using distributed printers that are capable only of small print runs and have poor operating ranges. Even our Compaq server suffers severe upgradeability and adaptability limitations, to the point where the newer Hyundai machines now outperform it.

Similar - and simultaneous - decisions need to be made with regard to the choice of operating systems and applications. As the relative financial cost between hardware and software changes (software is becoming relatively more expensive), the question of "total cost of ownership" is increasingly becoming a bottom line for corporate and individual users - and this includes both current and probable future costs. From the perspective of a general computer user, the operating system and applications need to be easy to learn and preferably very similar to what they have used before. From the perspective of the network manager, the operating system needs to be stable, capable of working with a variety of hardware configurations and user operating systems and, most importantly, secure. Furthermore, specialist hardware, applications and operating systems are preferred for specialist tasks.

Several years ago these decisions were, in the general sense, relatively easy. The MS-Windows operating systems, with their relatively low hardware requirements, and their associated applications were the most user-friendly and therefore the preferred choice for general use, whereas the resource-intensive but strongly performing versions of UNIX were considered the best choice for network servers. This situation, whilst still true in the general sense, has changed significantly. MS-Windows has made significant inroads into the server and networking markets, and Linux increasingly comes with a range of user-friendly desktop applications. Making a lasting decision on these matters is further complicated by the fact that whilst Linux computers can access MS-Windows machines on a network, the reverse does not apply, and whilst Linux operates on a variety of hardware systems, MS-Windows now only operates on one. Whilst MS-Windows now has the advanced NTFS file system, Linux cannot be installed on it.

It is also necessary to take into consideration the overwhelming market share and familiarity that people have with Microsoft systems (about 90% of the user and 40% of the server market), the peculiar conditions of East Timor (where the Berne Convention for international copyright law is not in effect - for now), and the existing and likely future computers at the Ministry. Considering these conditions, some general rules can apply. For server computers, where adaptability, reliability and security are at a premium, open-source, low-cost operating systems with proven reliability are preferred - and the best candidate under current conditions is the Debian distribution of Linux (with Red Hat Linux a close second), once sufficient Linux training has been given to a systems administrator. For the newer Hyundai computers, with Microsoft NT and MS-Office pre-installed, there is no need to change the current configuration, although the OpenOffice suite is currently recommended above MS-Office upgrades and the Mozilla web browser over Internet Explorer. Furthermore, when the successor to Windows XP is introduced (or whatever MS decides the name will be), the relative cost of that upgrade needs to be compared to the cost of data transfer and reinstallation of a desktop Linux system. Finally, the older Compaq desktop computers, running MS-Windows 98 and MS-Office, are limited in their hardware upgradeability. With the exception of installing K-Meleon, a faster and more efficient web browser, no other general changes are recommended.

This said, software also needs to be configured and customized for particular circumstances. The general rule is that software should not be installed unless it is going to be used - there is no need to generate additional work over compatibility and conflicts. This, of course, includes machines which are specifically set aside for testing new software. Nevertheless, recognized and identified needs must be catered for. Particular printers, to give a simple example, often come with their own specialist software to make full use of their resources. A computer that is being used for intensive database work requires a good database program (e.g., PostgreSQL, MySQL) that satisfies the ACID (Atomicity, Consistency, Isolation, Durability) tests. A computer used for publishing should have advanced desktop publishing and graphic manipulation programs (e.g., Adobe Photoshop).

A specific task at the server level for system administrators is file system, drive and RAID (Redundant Array of Inexpensive Disks) management. Servers should have multiple disks, whereas user computers typically have a single disk. Linux systems typically have multiple partitions, whereas MS-Windows systems usually have a single partition as the C: drive. In terms of hardware, user computers should use the inexpensive IDE drives, whereas server computers should use the more expensive, but more reliable, SCSI drives. In MS-Windows 2000 there are two types of drive partitions, primary and extended. Primary partitions are used directly for file storage, and a disk is limited to four primary partitions. Extended partitions cannot be accessed directly; they are used to divide a physical drive into more than four sections, each accessed as a logical drive. All this can occur on a single physical drive, so, for example, a single hard disk (Disk 0) can have multiple partitions (disk drives C:, D:, E: and F:).

The classic school of thought for Linux disk partitioning suggests that multiple partitions should be created: one for the root filesystem, and one each for /usr, /home, /usr/local, /opt, /boot, /var and /tmp. The argument in this case is that corruption of one filesystem will have little effect on the others, and that extra space can be reserved for data (/home, /tmp, /var) without causing stress to the root filesystem. It also means that older, smaller disks can be used for specific tasks, such as the /boot filesystem. The primary disadvantage is that if one filesystem fills up, more disk space will need to be added and the logical units will need to be split, which complicates use. Given these problems, more recent thinking suggests that whilst multiple partitions should still exist, they should be kept to a minimum, with the entire filesystem on one primary partition with the exception of /var, /home, /opt and /usr/local. Of these, /var and /tmp should be stored on local disks and the others on the server, in one or two logical drives on an extended partition. Another primary partition should include an emergency, minimal installation. An extended partition of four logical drives can be used for the previously mentioned filesystems and two swap partitions.
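As an illustration, the classic multi-partition scheme described above might appear in /etc/fstab as follows. The device names assume a single SCSI disk with logical drives in an extended partition; they are examples only, not the Ministry's actual configuration:

```
# /etc/fstab - illustrative classic partitioning scheme (device names assumed)
/dev/sda1   /boot       ext3    defaults        1 2
/dev/sda2   /           ext3    defaults        1 1
/dev/sda3   swap        swap    defaults        0 0
/dev/sda5   /usr        ext3    defaults        1 2
/dev/sda6   /var        ext3    defaults        1 2
/dev/sda7   /home       ext3    defaults        1 2
/dev/sda8   /tmp        ext3    defaults        1 2
```

Here sda1 to sda3 are primary partitions, whilst sda5 to sda8 are logical drives carved from an extended partition, matching the arrangement described above.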

Needless to say, creating disk partitions is an action that requires planning and care. As it is a data-destructive process, backups prior to partitioning are essential. In MS-Windows 2000 the Computer Management administrative tool provides the means to create primary and extended partitions and logical drives, whereas in Linux, Disk Druid is the most popular program. In MS-Windows 2000, once created, the drive must be formatted with a filesystem, either FAT or NTFS, the latter supporting larger drives and better resource allocation. FAT file systems can be converted into NTFS, but not the other way around. It is usually recommended to have the primary boot partition formatted with FAT, so that if the need arises the system can be booted under MS-DOS, and - assuming that only MS-Windows is ever going to be installed on the machine - to have the other partitions formatted under NTFS.

RAID improves data security and performance and comes in different implementations (currently RAID levels 0 to 5), with the most common being RAID 0, 1 and 5. RAID 0 improves performance and RAID 1 and 5 are used for fault tolerance. RAID 0 uses disk striping, where two or more partitions on separate drives are configured as a stripe set. Data is broken into blocks ("stripes") which are written sequentially to all drives. RAID 1 uses disk mirroring, where two partitions are created on two drives with identical configuration. If one disk fails, there is no data loss because the other drive still contains the data. This is the easiest RAID implementation. The third option, RAID 5, uses disk striping with additional parity checks. It has better read performance than mirroring but requires a minimum of three disks.
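The parity check that RAID 5 adds to striping can be sketched with a bitwise exclusive-or (XOR), which is how parity blocks are computed. The byte values below are arbitrary illustrations, not real disk data:

```shell
# RAID 5 parity sketch: the parity block is the XOR of the data blocks.
block_a=170   # data block held on disk 1 (binary 10101010)
block_b=204   # data block held on disk 2 (binary 11001100)
parity=$(( block_a ^ block_b ))   # parity block written to disk 3

# If disk 1 fails, its block is rebuilt from the surviving block and the parity:
rebuilt_a=$(( parity ^ block_b ))
echo "parity=$parity rebuilt=$rebuilt_a"   # prints: parity=102 rebuilt=170
```

Because any one block can be recovered from the XOR of the others, the array tolerates the loss of a single disk, which is why a minimum of three disks is required.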

With the falling price and increased speed of hard disk drives, and given its ease of implementation, RAID 1 is increasingly preferred, despite its poorer write performance and greater disk storage overhead compared to RAID 0 and 5. Unlike striping, disk mirroring can mirror any partition; in MS-Windows, boot and system partitions cannot be part of a striped set. It must also be mentioned that MS-DOS does not support RAID and can make RAID-configured drives unusable. It is preferable that mirrored disks are on separate drive controllers, to protect against controller failure. In MS-Windows 2000, RAID is implemented through the Disk Administrator, whereas in Linux a number of console commands need to be utilized, including mkraid (initializes RAID devices), raidstart (configures and starts RAID devices), raidhotadd (adds spare devices to a running RAID array), raidhotremove (removes devices from a running RAID array) and raidstop (unconfigures a RAID device array).
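The Linux raidtools commands mentioned above read their configuration from /etc/raidtab. A minimal sketch for a two-disk RAID 1 mirror might look like this (the device names are assumptions):

```
# /etc/raidtab - illustrative RAID 1 (mirroring) configuration
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    nr-spare-disks        0
    persistent-superblock 1
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1
```

With this file in place, mkraid /dev/md0 would initialize the array, after which /dev/md0 is formatted and mounted like any other device.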

Despite the best decisions concerning hardware and software and the effective implementation of fault tolerance, data loss still occurs. Computer hardware systems are machines, and machines break for a range of environmental reasons. Computer software systems are the logical products of human endeavour and are likewise prone to error, malice and unexpected consequences. Because of this, prevention of data loss and system failure analysis are required. Whilst the specific implementation of these data protection strategies is discussed in depth in the Security and Firewalls section, in terms of resource management systems administrators must evaluate the importance of system data and the attendant risks, and the hardware, software and administrative costs of data backup solutions, virus protection and other forms of preventative maintenance (such as ensuring that computers are kept in clean environments and that users keep the environment that way, regular disk defragmentation, rescue/system recovery disks, etc.).

In MS-Windows 2000, system diagnosis tools include the Task Manager, the Services Manager, the Event Viewer and Diagnostics. Every time an application is started or a command entered, MS-Windows 2000 starts one or more processes to manage the program. Some of these are interactive, that is, they are started via a user input device and remain in the foreground until another program is interactively initiated. Other processes operate in the background; these include those started by the operating system and those configured to run independently (e.g., a batch file scheduled with the 'At' command).

The Task Manager is used to check the status of applications, processes and system performance, and can start and end tasks as needed. The Services Manager, as the name indicates, activates, stops and configures specific server services, such as alerts, file and directory replication, event logs, printer spooling, UPS support and so forth. Auditing policies also need to be established to determine what sort of events are recorded, along with directory, file and printer security. The Event Viewer keeps a log of application, security and system events, classifying events as Information, Success, Failure, Warning and Critical Errors. The Event Viewer also records the source, category, event, user account, computer, description and detailed data or error codes. Such logs must be archived.

Linux is considered one of the more stable operating systems available, but nonetheless system failures - and far more commonly application errors - do occur. Graphical tools for diagnosing system failures are not as developed in Linux as they are in MS-Windows, with almost all diagnosis occurring at the terminal window level with command-line utilities. There are some recent exceptions to this, such as the System Monitor, which manages applications and processes like the Windows Task Manager. Further, Linux makes detailed hardware checks at startup, which can be reviewed by typing /bin/dmesg on the command line or by viewing the /var/log/boot.log file. A variety of diagnostic information also exists under the /proc filesystem. Like the MS-Windows Event Viewer, Linux documents almost every aspect of its operation. The principal directory for these logs is /var/log, and the logging configuration file is found at /etc/syslog.conf.
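As a sketch of command-line log diagnosis, the following counts error entries in a syslog-style file. A fabricated sample stands in for /var/log/messages, which normally requires root access to read:

```shell
# Create a small sample file in syslog format (the entries are invented).
cat > sample.log <<'EOF'
Aug 20 10:01:12 mfac kernel: hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
Aug 20 10:01:12 mfac kernel: hda: dma_intr: error=0x40 { UncorrectableError }
Aug 20 10:05:30 mfac sshd[412]: Accepted password for admin from 10.0.0.5
EOF

# Count the lines that record an error condition:
grep -c 'Error' sample.log   # prints: 2
```

The same pattern - grep against files under /var/log - serves for most routine fault-finding, for example searching for a failing disk's device name or a particular daemon's messages.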

Finally, one aspect of resource management that must have priority for any system administrator, and especially for a network, is the establishment and maintenance of a database that includes the name of each computer, its hardware devices, the operating system and applications installed, and records of failures, repairs, upgrades and - often overlooked - location. Naturally enough, the more homogeneous a network, the easier such a database is to set up, but of course this isn't always possible. Indeed, the less homogeneous a network of computers, the greater the need for such a database. Having these records readily available provides a history and comparisons, thus improving the system administrator's ability to perform system diagnosis and predict future requirements.
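Such a database need not be elaborate; even a tab-separated text file queried with standard tools will serve. The machine names, locations and dates below are invented for illustration:

```shell
# A minimal machine inventory as a tab-separated file
# (fields: name, operating system, location, last repair date - all invented).
printf 'mfac-01\tMS-Windows NT\tGPA Building, ground floor\t2003-05-11\n'   > inventory.tsv
printf 'mfac-02\tMS-Windows 98\tGPA Building, first floor\t2002-11-02\n'   >> inventory.tsv
printf 'mfac-srv\tDebian GNU/Linux\tGPA Building, server room\t2003-07-29\n' >> inventory.tsv

# List every machine recorded at a given location:
awk -F'\t' '$3 ~ /ground floor/ { print $1 }' inventory.tsv   # prints: mfac-01
```

A flat file of this kind can later be moved into a proper database program as the network grows, without losing the accumulated history.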


Ministry of Foreign Affairs and Cooperation, GPA Building #1, Ground Floor, Dili, East Timor


Website code and design by Lev Lafayette. Last update August 20, 2003

