From Windows to Linux

Date: 2017-12-25 (and 2022-12-06)


There is strictly no warranty for the correctness of this text. You use any of the information provided here at your own risk. I cannot be held responsible if you damage your system by using any of the information provided here.


Contents:

  1. Introduction
  2. Software-Development
  3. Free Software in Contrast to Proprietary Software
  4. The Unix-Roots of Linux
  5. Multiple Users and User-Rights. The System Administrator "root"
  6. File-Management. The Linux Directory-Tree
  7. The Shell
  8. A Design-Principle of Linux-Programs
  9. Linux-Distributions
  10. System Control Tool
  11. Managing the Installed Software
  12. Desktop Environments and GUI-Toolkits
  13. Applications
  14. Games
  15. Running Windows-Programs on Linux
  16. Perl and Python
  17. Mischievous Software
  18. Conclusion


1. Introduction

Preface: A Dialogue from "Star Wars: Andor", Season 1, Episode 5:

Nemik's working on some device.
Andor: "That's an old one."
Nemik: "Old, and true, and sturdy. One of the best navigational tools ever built.
        Can't be jammed or intercepted.
        Something breaks, you can fix it yourself."
Andor: "Hard to learn."
Nemik: "Yes, but once you've mastered it - you're free."

Many people have difficulties when they try to switch from Windows to Linux. That's because "Linux is not Windows". Of course there are many books on the subject, but this site tries to give an overview of the things that are very unfamiliar to Windows-users when they boot into Linux for the first time.

Windows is an operating system. Linux is another operating system. So Linux is not a program for Windows.

To understand the differences, we first have to take a look at how software is developed in general, and then at the principles of free software. This may seem a bit weird and rather unfamiliar, but it is the most important thing if you want to understand what Linux is all about.


2. Software-Development

Computers can be programmed. Operating systems are programs.
The kernel (= the core) of Windows and the kernel of Linux are written in a computer-language named "C". Other parts of Windows are written in a language named "C++".
Code in C or C++ cannot be understood by the computer directly. The computer can only process so-called "machine code", which consists of pure numbers, stored in bytes of memory. Although that kind of machine code could be written directly (or slightly abstracted in the language "Assembler"), there are some disadvantages, as machine code depends on the type of the processor in the computer, while C-code is (to a certain extent) system-independent.
So, if C is used, a program is needed to translate C-code (source code) into machine code (executable files, ".exe"-files on Windows). Such a program is called a "compiler".

Windows doesn't come with a C-compiler, as Microsoft regards that as an application, like Office, for example. So there's a C/C++-compiler called "Microsoft Visual C++", but there are also other compilers such as "Borland C++" or the free compiler "gcc/g++", which comes for Windows in a package called "MinGW".
Actually, for Windows 10, Microsoft now offers a free edition (you don't have to pay for it) of its development-tools too, called "Visual Studio Community".

Code in C or C++ can be written simply in a text-editor like "Notepad", but the packages above come with a special editor-program for development. Such a program is called an "IDE" ("Integrated Development Environment"). A lightweight, independent IDE would be "Geany", for example.
Nevertheless, many programmers still like to write code in text-editors. On Linux, two very powerful editors have been created. One is called "vim" (Windows-version "gVim"), the other "Emacs".
Both are much more capable than "Notepad" on Windows, but their behaviour and keys are different from what you are used to from Windows-applications. This website is written with vim.

So, for example, if you have this C-code in a text-file:

#include <stdio.h>

int main() {
    puts("Hello World!");
    return 0;
}

you can run the compiler on it, and it will create an executable "a.exe" from it.
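For example, assuming the code above is saved in a file called "hello.c" (the filename is just an example) and the free compiler gcc is installed (on Windows as part of MinGW), compiling and running it on a command-line could look like this:

gcc hello.c          # the compiler translates the source-code into an executable
                     # (by default named "a.exe" on Windows, "a.out" on Linux)
./a.out              # run the executable (on Windows: a.exe)
Hello World!         # the program's output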

The programmers at Microsoft write the source-code for Windows. So Microsoft knows that code. But if you buy Windows in a store, you only get the executable (and the rest you need to be able to run it); Microsoft won't hand out the source-code. So as a Windows-customer, you can neither read (study) nor change the source-code of Windows. You're not allowed to, and you don't get the opportunity, because the source-code is kept by Microsoft.


3. Free Software in Contrast to Proprietary Software

Windows is a product of the software-company called Microsoft. Microsoft produces it and sells it to stores. You, the customer, buy it there (or maybe you can buy it online). You pay money and get in return certain rights regarding what you can do with the software. What these rights are in detail is regulated by an "end-user license agreement" (EULA).
If you buy Windows, you may certainly install and use it on a single machine.
You probably may not install it on a second machine.
I'm not sure if you may resell your copy to a third person once you delete it from your own computer. Maybe, maybe not.
Since Windows XP you have to contact Microsoft and "activate" Windows with a certain registration-code, so that Microsoft can control how often your copy is installed.
You don't get the source-code, so you can neither read nor alter it. It is proprietary, "closed source" software.

That's probably what you know, that's what you're used to.

Linux takes a totally different approach on all this.

And that happened like this:
Once upon a time, in the 1970s, a computer-scientist named Richard Stallman worked in a computer laboratory at MIT in the USA.
As computing power was very limited back then, several people had to work on a single machine. Each person had his or her own workspace and his or her own keyboard (called a "terminal"), but all keyboards were attached to one computer.
These people shared the software on that computer, and everybody could read its source-code and change it, if he or she wanted to.
Then, one day, the laboratory bought a program from an external commercial software-company. Like Windows (which didn't exist back then), it was closed-source software. So the people at the laboratory (the users) couldn't read its sources or change the program.
More and more companies produced proprietary software, and more and more programs of this kind were bought.

This annoyed Richard Stallman. He wanted software to be the way it had been for him before. He wanted software whose sources could be read and changed by everyone. So he became an activist and founded the "Free Software Foundation".
The foundation said: "Commercial companies may produce proprietary software, but we produce our own kind of software, free software."

In English, the word "free" has two meanings: either as in "free beer" (you don't have to pay for it) or as in "freedom". "Free software" means "free as in freedom" - so it doesn't necessarily mean that you don't have to pay for it: maybe you do.

So, the Free Software Foundation wanted to defend and promote the rights of the user. They demanded that free software should give four freedoms to the user. The user should have the right to:

  1. run the program, for any purpose,
  2. study how the program works and change it,
  3. redistribute copies of the program,
  4. distribute copies of his or her modified versions of the program.

Proprietary software can control the computing of the user. Free software enables the user to control his own computing.

The Free Software Foundation developed a license for that kind of software, which guaranteed the user these four freedoms (or rights). It is called the "GNU General Public License" (GPL).

Notice that this license also grants the user the right to sell copies of the program (as part of the right to distribute it). That means, although today most free software is available somewhere on the internet legally without any charge, someone else may try to sell you the same software for a charge, be it in a store or on the internet. He is allowed to do that, and if you buy the software from him, you'll have to pay for it.

Then, free software was developed within the GNU Project, which Stallman had launched (actually even before founding the foundation). Many command-line programs for an operating system were written, but also programs like "GNU Chess". Richard Stallman wrote the powerful editor "GNU Emacs".

The GNU Project wanted to develop a whole free operating system. But the GNU kernel was not yet finished when, in 1991, Linus Torvalds started a project to develop his own kernel called "Linux".
Many developers joined Linus Torvalds over the internet, which was just getting started back then.

The kernel "Linux" finally became usable, and the system-utilities of the GNU Project were added. So, a free Unix-like operating system for Intel-processors was created. Its name should be GNU/Linux, but as just "Linux" is shorter, most people call it just that (which may not be fair altogether, as the Free Software Foundation and the GNU Project were largely responsible for it too, but there you have it: The world's not fair). The Linux-kernel also uses the "GNU General Public License" (GPL, version 2).

So the way of developing new software is rather different:

It is quite obvious that the internet drastically accelerated the development of more complex free software. "Linux is a child of the internet."

The principles above also mean that the developer of proprietary software has a strong interest in market share. He wants to sell as many copies as possible, as that's how he earns money.
To free software or its developers, on the other hand, market share is totally irrelevant. Also, the user who downloads free software (especially without charge) isn't a customer of anybody. "The customer is king" - that may apply to someone who has bought proprietary software, but not to someone who downloaded free software. If he doesn't like it, if the software doesn't do what he expects from it, if he won't download it again: the free software simply doesn't care.
Free software doesn't have to look good or be modern either. It doesn't have to survive on the market. There is no market for it. If a free program is so bad that it doesn't have a single user, it still doesn't care.
Keep that in mind if you deal with Linux. You're not a customer (unless you pay companies for special support-services), you're not king. You just have the four freedoms (to use, to study and change, to distribute (with or without changes)).
If you don't like a program or the GNU/Linux operating system in general, don't complain, it's useless.
It would be up to you to make it better. If you can't do that, because you lack the skills or the time, all you can do is avoid using it (until someone else improves it).

Now you should be able to understand why many free programs for Linux may seem kind of unfinished, unpolished, old-fashioned or experimental, and why that's a result of the underlying principles of their development, which are rather different from the common principles of capitalism.

When software is free, that doesn't necessarily mean it's technically better. Maybe the proprietary software does the job better. But still the issue remains that proprietary software can make you, the user, lose control over your own computing and, for example, spy on you.
Or like Linus Torvalds put it: "Software is like sex: It's better when it's free."

In this interview, Richard Stallman explains in detail what Free Software is about.

In another lecture, at the University of Calgary (2009-02-03), Stallman also explains that the postulation of the "Four Freedoms" can only relate to software, because these freedoms depend on its very nature. Only when dealing with compiled software is there an executable program and a source code that belongs to it, which may be held back or changed. Only software and other digital data can be copied countless times and distributed quickly and easily.
That's not the case with the things of everyday life like a table or a chair, for example. A table itself doesn't have a source code, and it can't be copied easily millions of times. There may be a technical manual describing how to produce the table, but that would be something other than the table itself.
So the imbalance in the relation between the producer and the user of a program, and the claim to the "Four Freedoms" of the user, is a problem that only occurs with computer software and hasn't caught the attention it deserves yet. But the more the digitalization of the world progresses, the more important these questions become.


4. The Unix-Roots of Linux

At the beginning of the 1970s, in the very early days of computing, scientists needed a professional operating system for their large machines too. So an operating system called "Unix" was developed at AT&T Bell Laboratories by people like Ken Thompson, Dennis Ritchie and Brian Kernighan. The first version of Unix was released in 1973.
To be able to write programs for Unix, Dennis Ritchie created the programming language C.

In 1991, Linus Torvalds was a student of computer science at the University of Helsinki. As a teenager in the 1980s, he had the home-computers Commodore VIC-20 and Sinclair QL. There wasn't much software for these kinds of exotic and limited machines, so unlike most other teenagers he didn't spend his time playing games, but learnt how to program these computers on a fundamental level, so that he could write his own programs.
As a student, he bought a PC with a 386-processor. At university, there was a course about Unix, based on the book "The Design of the UNIX Operating System" (Maurice Bach, 1986).
So Linus Torvalds had the idea to write a Unix-like operating system for his 386-PC. He announced his plans in a newsgroup, got help and succeeded.

So Linux is basically a free clone of Unix. Back then, nobody thought it would one day more or less replace the well-respected commercial Unix.
Linux's mascot is a penguin called "Tux":

[Image: Tux, the Linux penguin mascot]

Unix had some rather advanced features and was therefore way ahead of its time in the 1970s. In particular, it supported multi-tasking and the management of multiple users and user-rights.
In comparison, Microsoft DOS (MS-DOS), first released in 1981, supported just a single task and a single user.

In Unix/Linux, the support of multi-tasking means that tasks run in separate environments. They can crash like in every other operating system, but in Unix/Linux the rest of the system usually isn't affected by this and stays intact. Therefore, Unix/Linux is known to be a rather stable operating system.


5. Multiple Users and User-Rights. The System Administrator "root"

As Unix was written to be used on a workstation that was probably shared by several people using terminal-keyboards (see above), it supported multiple users and offered a management-system for user-rights. And so does Linux. If you start Linux, after a while the boot-process usually stops at a login-screen. There, you have to enter your username and your password. After that, your user-settings are loaded and you can access the system.

Today's Linux-versions usually offer a graphical desktop, but that's not even required. It is possible to log into a Linux-system just in text-mode, which looks like MS-DOS. Even today, servers especially often don't need graphical interfaces.
Even if the boot-process takes you to a graphical environment, you can still switch to virtual consoles in text-mode, using the keys "Ctrl+Alt+F1" to "Ctrl+Alt+F6".
With "Ctrl+Alt+F7", you get back to the graphical desktop.

The files a user works with are stored in a directory called "/home/username". So if there's a user with the username "john", his files would be below "/home/john".
Files and directories are owned by a certain user.
For each file and directory, user-rights are stored. There are three kinds of permissions: to read (r), to write (w) and to execute (x).
If "john" creates a file "hello.txt", he becomes the owner of that file and can set the file-rights for others, that is, determine whether others are allowed to read, write or execute the file.

Then, there's the system-administrator. His username is always "root". root is allowed to do everything on the system. He can even easily delete system-files that are essential for the system to run. So root is allowed to break the system in any way.
root can also access all files of all users, no matter what user-rights they have set.
And root can also set the user-rights for the owners of files. These are again read (r), write (w) and execute (x).
If "john" owns a file, "root" can change that: he can give ownership to himself or to a third user, like "paul" maybe. He can also withdraw john's permission to read his own file, for example.

Users can also be members of groups. For groups, there are again the three rights mentioned.

So all in all, the user-rights that are stored with a file are: "rwx (owner) rwx (group) rwx (others)".
An example: Let's say "john" creates the file "hello.txt". It contains somewhat secret text. He himself wants to be able to read and write the file, but not to execute it, as it's a text-file. Groups and others shouldn't be able to do anything with the file. So, he would set the rights for the file to: "rw- --- ---".
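On the shell, this could be done with the command "chmod" ("change mode"). A small sketch (the group "users", the file-size and the date in the output are made up for the example):

chmod 600 hello.txt            # 600 means: rw- for the owner, --- for group and others
ls -l hello.txt                # show the file together with its user-rights
-rw------- 1 john users 42 Dec 25 12:00 hello.txt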

Directories can only be accessed, if the right "execute (x)" is set for them. "john" has his home-directory "/home/john", and "paul" has his home-directory "/home/paul". If the "x"-right of john's home-directory is withdrawn for groups and others, and "paul" tries to switch to "/home/john", he will fail and get an error about lacking permissions.
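On the shell, that situation could look like this (a sketch; the exact wording of the error-message may differ):

chmod go-x /home/john          # john (or root) withdraws the "x"-right for group and others
cd /home/john                  # now paul tries to enter the directory ...
bash: cd: /home/john: Permission denied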

This system of user-rights contributes to the security of the Linux-system.

It also improves the security when it comes to viruses. At the moment, viruses aren't that much of a problem for Linux-systems anyway, because globally there are many more installations of Windows-systems, so developers of viruses are more interested in attacking Windows.

It is also possible to run a Linux-system with just a single user. This user will then also have to act as the system-administrator "root". For security reasons it is recommended, though, to use an account for a normal user as well.
So if you run a single-user-system, you usually work as an ordinary user. Only when you have to change system-files do you become "root" for a short while. This is done with the command "su" on the shell ("su" is an abbreviation of "superuser").
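A typical sequence on the shell could look like this (just a sketch):

su                   # become "root"; you are asked for root's password
                     # ... change some system-file as root ...
exit                 # become an ordinary user again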


6. File-Management. The Linux Directory-Tree

On Windows, the files on the hard disk are displayed under "C:\".
Maybe you remember, that drives "A:" and "B:" were reserved for floppy-disk-drives.
So, on Windows, an executable file of a program could be for example:

C:\Program Files\MyProgram\myprog.exe

System-files are stored in the directory

C:\Windows

especially in:

C:\Windows\System

For example system-libraries ("Dynamic-link libraries", suffix ".dll") can be found here.


On Linux, this is all rather different. There is one single directory-tree instead of drive letters.
The separator for the directory-names is the slash "/", not the backslash "\" (like on Windows).
As mentioned above, files of a user are in his home-directory, so john's files are in "/home/john".

Only the user "root" has the write-permission for most system-files.

The top-level directory (= the directory-tree's root) is just "/". If you use a file-manager like "dolphin", "nautilus" or "PCManFM" and look inside "/", you see some of the following directories:

bin
boot
dev
etc
home
lib
media
opt
proc
root
tmp
usr
var
windows

These directories have roughly the following contents or meanings:

bin       essential programs (binaries)
boot      the files needed to boot the system, including the kernel
dev       device-files, through which the hardware is accessed
etc       the system-wide configuration-files (plain text)
home      the home-directories of the users
lib       system-libraries
media     mount-points for external drives and USB-sticks
opt       optional, larger software-packages
proc      a virtual directory with information about running processes
root      the home-directory of the system-administrator "root"
tmp       temporary files
usr       installed programs, libraries and documentation
var       variable data like log-files
windows   an existing Windows-partition may be mounted here on dual-boot machines

Notice that, unlike Windows, Linux is always case-sensitive. That means "/home/john" would be something else than "/home/John".
They could both exist and would be two completely different directories.

So, the Linux directory-tree is not tied to a certain disk-drive (like "C:\" on Windows). Instead, the files from which Linux is booted are spread across the whole directory-tree. Most of them are system-files. If the files of an external drive or a USB-stick are needed, they will be found below "/media" (or, on older systems, "/mnt"), where the drive or stick gets mounted.

Ordinary users usually stay in their home-directory ("/home/john" for user "john"). They can create a (nearly) unlimited number of their own sub-directories there.
From their home-directory, users can still execute programs that are found in the directories "/usr/bin" or "/usr/local/bin". Because of the $PATH-variable, they don't have to switch to these system-directories to execute these programs. They can stay in their home-directory, and the system finds the programs for them.
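You can look at this mechanism on the shell. A small sketch (the exact output differs from system to system):

echo $PATH                         # show the list of directories that are searched for programs
/usr/local/bin:/usr/bin:/bin       # (typical output, shortened)
which ls                           # show where the program "ls" actually is
/usr/bin/ls                        # (or "/bin/ls", depending on the system)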

When you see the Linux directory-tree for the first time, it will surely be very unfamiliar to you. "What on earth is that?", you may ask.
But consider: Most of its directories contain files of the kind Windows usually hides from you in "C:\Windows\System". When was the last time you looked into that directory?
And: If you continue using Linux, you will get used to the directory-tree in maybe just a few days or weeks. It has been around for decades, and it probably will be for some more. I have been using that structure for more than 10 years now.
So, the knowledge about the Linux directory-tree is something you learn for life. You may teach it to your grandchildren one day. It's worth it.


7. The Shell

To start a program in Windows, you double-click on its icon on the desktop. Or you click on the start-menu, find the program you want and click on it. Icons and menu-entries contain links to the executable files in directories.

This can be quite similar on Linux, but it may not be set up correctly. The program-files surely are in the directories, but not every program installs desktop-icons and menu-entries correctly. That's why you check the real directories more often, or maybe you start programs by opening a terminal and typing in the program's name. On Linux, there are a number of terminal-programs, like "xterm", "rxvt" or "konsole". They are similar to Windows' command prompt.

If you start a terminal-program on Linux, you get a window with a prompt to enter system-commands. (The appearance of that window, like the size of the fonts inside it, can be configured.) MS-DOS had its own shell-language with commands like "dir" or "copy".
Although the commands on Linux do something similar, the language is different, because it is inherited from the Unix-tradition. So inside a terminal a shell is running, usually the one called "bash" ("GNU Bourne-Again Shell").
Here are some basic commands in bash:

Listing files: The contents of a directory are listed with the command "ls" (list).

Changing directories: With "cd" (change directory) you change into another directory; "pwd" (print working directory) shows where you currently are.

Copying and moving files: Files are copied with "cp" (copy) and moved or renamed with "mv" (move). New directories are created with "mkdir" (make directory).

Viewing files: The contents of a text-file can be shown with "cat" or, page by page, with "less".

Deleting files: Files are deleted with the command "rm" (remove). It's used with the option "-r" to delete directories. There is no safety catch here (like a "recycle bin" or such). Basically, removed data is lost forever. In a way, at this point Linux hands you a loaded gun and doesn't protect you if you shoot yourself in the foot. Linux treats the user like a grown-up, who is responsible for his own actions. So you have to take care yourself.

There are lots of options to the shell-commands. You can get help on them if you type "man " and the command's name ("man" for "manual"), like "man ls", for example.

Also, a few symbols can be used in the shell:

.    the current directory
..   the parent directory (the directory above the current one)
~    the user's home-directory
*    a wildcard that stands for any number of characters in a filename

So,

mv ./myfile.txt ~

means: "Move the file 'myfile.txt' from the current directory to my home directory."

One feature of bash makes it especially usable. It is called "tab-completion": If you enter a few letters of the name of a command or a file you want to use and then press TAB (the tabulator-key), bash automatically completes what you wrote to the command-name or file-name it thinks you wanted. So you don't have to type in whole filenames. Just a few letters, then TAB, and in many cases you're done. If several names are possible, bash shows them when you press TAB twice.
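For example (assuming there is a sub-directory called "Documents" in the current directory):

cd Doc               # ... press TAB here, and bash completes the line to:
cd Documents/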

bash can be configured in many ways. If you have, for example, a default working-directory inside your home-directory, you can configure bash so that it starts in this directory every time.
Desktops let you define a keyboard-shortcut to open terminals and other programs. So you just press that shortcut and you are in bash in your default directory. That's rather quick and useful. That's why on Linux it's still common to use the command-line.
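One way to set such a default working-directory is a line in the file ".bashrc" in your home-directory, which bash reads when it starts. A minimal sketch (the directory-name "work" is just an example):

# in ~/.bashrc:
cd ~/work            # start every new shell in the directory "work" below the home-directory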

The language of bash is also a programming-language. Among other things, bash-scripts are often used as installation-routines. They also play an important role in the Linux boot-process.
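Just to give an impression of the language, here is a minimal bash-script (a sketch; it assumes it is saved as "backup.sh" and made executable with "chmod +x backup.sh"):

#!/bin/bash
# Copy all ".txt"-files from the home-directory into a backup-directory:
BACKUPDIR=~/backup
mkdir -p "$BACKUPDIR"              # create the backup-directory, if it doesn't exist yet
for file in ~/*.txt; do
    [ -e "$file" ] || continue     # skip, if there are no ".txt"-files at all
    cp "$file" "$BACKUPDIR"
done
echo "Backup done."

It could then be run with "./backup.sh".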

In 2016, even Microsoft recognized the power of bash and integrated it into Windows 10 (with the "Windows Subsystem for Linux").


8. A Design-Principle of Linux-Programs

The design of many Linux-programs follows a principle that has its roots in the Unix-tradition. According to this principle, a program should only do one thing, but do this thing well.

To accomplish more complex tasks, a program should work together with other programs.
This philosophy leads to many small programs, whose functions may not be obvious to the user at first.

For example, the program "k3b" is designed according to this principle. It offers a graphical user interface to burn CDs or DVDs. In the background, k3b works together with many small command-line programs that accomplish single tasks. Just to give an example, in this case these programs include (you don't have to memorize them):

cdrecord, cdrdao, growisofs, mkisofs, normalize

For example, "mkisofs" creates a filesystem and ".iso"-files, "normalize" normalizes the volumes of audio-files.

The user doesn't come in touch with these small programs; he can simply use the graphical interface of "k3b".
He may encounter these programs in the shell, though, and may not know what they are used for. Of course he may also use these programs directly, if he knows how to do it (it isn't difficult in the case of "normalize").

Such small programs for specific tasks can be used by several larger applications, and they can be combined in different ways. So not only "k3b" may use a small program like "normalize", but maybe also an audio-editor.
In a way, these smaller programs act like modules that a larger application can import, so that the code for a certain task only has to be written once.
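This principle can also be seen directly on the shell: Small programs can be connected with the pipe-symbol "|", so that the output of one program becomes the input of the next. None of the programs involved knows anything about the others. For example:

ls ~ | wc -l         # "ls" lists the home-directory, "wc -l" counts the lines:
                     # how many entries are in the home-directory?
ps aux | grep bash   # "ps" lists all running processes, "grep" keeps only the lines containing "bash"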

On Windows on the other hand, there are large applications that try to accomplish many tasks at once.
Some Linux-programs are designed more like these applications for Windows, for example Firefox, Thunderbird or the email-client "kmail".

Nevertheless, a Linux-user should have heard of the principle that a program should only "do one thing and do it well", in order to understand why there are so many small programs on a Linux-system, which may seem weird because their purposes may not be obvious at first.


9. Linux-Distributions

When you buy Windows, or when you buy a computer, on which Windows is already installed, you get just Windows, the operating system.
If you want to use applications, such as games or Microsoft Office, you have to buy them separately.

When it comes to free software, it's not necessary to separate the operating system from the other applications. So, Linux usually comes with the operating system plus a huge amount of free software applications (like LibreOffice, for example).

For an ordinary user it would be too much and too difficult work to download the Linux-kernel and the GNU utilities and build his own Linux-system. Too many configurations would have to be made.
That's why several companies and projects create huge software-bundles containing a configured GNU/Linux operating system and applications. These bundles are called "Linux distributions".

Some Linux distributions are sold for money on a DVD in a package with a booklet. Most distributions can be downloaded for free from the internet. You can download either single files of the distribution or a ".iso"-file that you'd have to burn to a DVD yourself.

Well-known distributions are for example:

Debian
Ubuntu
Linux Mint
Fedora (Red Hat)
openSUSE

Some of them are built by Linux-companies (like "Red Hat" or "SuSE"). There is often a free "open" version and a commercial "enterprise"-version then. These companies earn most of their money by offering support-services for other business-companies.

There are also specialized, minimalistic distributions, that come with just a smaller selection of applications (like "Arch Linux" for example).

And there are very small distributions (of maybe 250 MB in size) that can be installed, but can also run from a boot-CD or from a USB-stick ("Puppy Linux", for example). At that size, the whole distribution can be loaded into memory on modern machines. Puppy Linux is actually quite usable for general purposes, but often these kinds of distributions are used as recovery-systems.

As the distribution is kind of responsible for the system-configuration, most distributions come with their own system-control-tool (in Windows this part of the operating system is called "control panel").

For a beginner, I'd suggest starting with one of the bigger distributions mentioned above. It depends on what distribution is able to recognize your hardware. I, personally, often get good results with openSUSE. I'm using it at the moment.

Most Linux distributions today also contain software that doesn't meet the requirements for free software. openSUSE also includes some packages of proprietary software, but separates them from the free software, so you have the choice of what to install and what to use.
The Free Software Foundation lists distributions that contain only free software.
The policies of other larger distributions on the topic of free software are listed there as well.


10. System Control Tool

In Windows, there is the "control panel". You can configure hardware, software-installations and system-settings there.
Advanced settings are stored in the Windows Registry, which can be edited with the program "regedit.exe", found in the directory "C:\Windows".

In most Linux distributions, there is also a system control tool similar to the Windows control panel. The one of openSUSE is called "YaST2" or "yast2" ("Yet another Setup Tool", version 2). Here's a screenshot of it:

[Image: Screenshot of YaST2]

In fact, the options that can be set there are stored in maybe a few hundred configuration-text-files in the directory-tree, especially in "/etc".
So, YaST2 is a graphical front-end to make changes to those configuration-files in a convenient and relatively safe way.
In Linux, there isn't a registry-database like in Windows. Instead, all of the system's configuration is stored in plain text-files in the directory-tree. Most likely, you need to be "root" to edit them. But if you are, nobody keeps you from doing damage while editing.

Many programs let users have their own program-configuration. For that, they write a configuration-file into the user's home-directory. Usually, these files are "hidden". This is simply done by putting a single dot right in front of the filename. The configuration-file for vim, for example, is ".vimrc" in the user's home-directory.
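On the shell, these hidden files can be shown with the option "-a" of "ls". For example (the output is of course just an example):

ls -a ~
.  ..  .bashrc  .vimrc  Documents  hello.txt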


11. Managing the Installed Software

Software for Windows usually comes with an installation-program "setup.exe", installs itself in directories like "C:\Program Files", adds DLL-libraries to "C:\Windows\System" and makes changes to the Windows Registry.
You can uninstall the software using the control panel. But often, remains of the program stay in the system, especially in the registry. This used to slow Windows down after installation and deinstallation of a number of programs.

Linux-software comes in so-called "packages". They are either ".rpm"-packages (RPM Package Manager, originally "Red Hat Package Manager") or ".deb"-packages (Debian packages). As openSUSE uses rpm, I'm most familiar with that one.
During the system-installation, usually a good selection of several hundred or thousand rpm-packages gets installed.
rpm-files contain the files to install, descriptions of the package and its files, and the information where to install the files. YaST2 knows the packages of the distribution. It can be used to install them after the system-installation. YaST2 offers a graphical user interface, where you can just click on checkboxes to install or deinstall the packages you want. When a package is installed, information about it is written into a special database on the system.
Libraries also come in packages. As programs often depend on libraries, the package of a program can have dependencies on certain library-packages. The information about these dependencies is also stored in the rpm-package, so YaST2 knows about it too. So if you ask YaST2, for example, to install the audio-application "audacity", it will also install the required libraries of the alsa sound-system, if they are not already on the system.
As far as I know, with the rpm-system clean installations and deinstallations without any remains are possible. So Linux-systems don't slow down, even after years of usage.
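Besides YaST2, openSUSE also has command-line-tools for the package-management. A few hedged examples (the package-name "audacity" is just an example; other distributions use other tools, like "apt" or "dnf"):

rpm -qa                      # list all installed packages
rpm -qi audacity             # show information about an installed package
rpm -ql audacity             # list the files, the package has put onto the system
zypper install audacity      # install a package including its dependencies (run as root)
zypper remove audacity       # deinstall it again (run as root)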

Some more detail: Most software for Linux is written in C or C++. So developers write source-code in these languages. On the websites of their projects, they usually provide the source-code in a file with the suffix ".tar.gz". That's basically a file-archive, similar to ".zip" on Windows. "zip" is also available on Linux, but traditionally, ".tar.gz" is used to archive files and directories.
To extract a file "myfile.tar.gz", one would write in a shell: "tar -xzvf myfile.tar.gz".
The extracted source-code would have to be compiled for a concrete platform. Today, there are 32bit- and 64bit-systems for example.
There can be complications in the process of compiling software. Maybe a library on which the program depends has changed since the time its source-code was written. Or the source-code just has bugs.
So the people who create the Linux-distributions take care of compiling the source-code for the platforms their distribution supports. As a result, there are executable files that work on these platforms. These files are then put into rpm-packages.
So, if a user is looking for a certain program, he should check whether his distribution provides an rpm-package of it and install it from there.
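Just for completeness: If there is no package, compiling from source typically looks something like this for many classic projects (a sketch with an invented package-name; not every project uses this "configure/make" scheme, so always read the project's installation-instructions):

tar -xzvf myprogram-1.0.tar.gz     # extract the source-archive
cd myprogram-1.0
./configure                        # check the system and create the Makefiles
make                               # compile the source-code
su -c "make install"               # install the compiled files as root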

The name of an rpm-file includes the version-number of the program and the platform for which it is compiled. For example, the rpm-package for alsa ("Advanced Linux Sound Architecture" (alsa) is the traditional Linux sound-system) could look like this:

alsa-1.0.27.2-3.2.1.i586.rpm

There's the name, then a version number, then "i586", which means an Intel Pentium-class processor (32bit) as the minimum, and finally the suffix ".rpm". The build of the same source-code for a 64bit-system looks like this:

alsa-1.0.27.2-3.2.1.x86_64.rpm

Again the name, the same version number, then "x86_64", so 64bit, and finally again the suffix ".rpm".
If you stick to your distribution, the correct packages will be installed. Just be aware that there can be different builds of the same software for different platforms. Sometimes, one has to choose the right package for his system.

When you install a program on Windows, you often have to restart the computer after the installation. On Linux, using the rpm-system, you usually don't have to do that. On the other hand, you more often have to become root to change system-settings. Therefore they say: "Windows - reboot, Linux - be root".


12. Desktop Environments and GUI-Toolkits

When you boot into Windows, its desktop is shown. There may be some icons or tiles on it and a "start"-button with a menu.

In GNU/Linux, two programs are responsible for the graphical environment. There's a server and a client. The server is called "X-Server" or "X Window System", the project is called "X.org" (and really has a website at "http://www.x.org").
The client connects to the X-server and is called a "window manager". There are several different window managers available. Here are the names of some commonly used ones (some are parts of larger desktop environments):

KDE
Gnome
Xfce
LXDE
Fluxbox
IceWM

KDE and Gnome are the biggest environments, the others are more lightweight window managers. I'm quite happy with LXDE at the moment.
So the graphical desktop of a Linux-system is not fixed. There are many ways it can look. Probably that's why the mascot of the company SuSE is a chameleon.
But although the look of the desktop can vary, the principles of the Linux directory-tree, the user-management with the system-administrator "root", the configuration-files and the shell always apply to the system. That's why it's good to know about these.

It is also possible to install multiple window managers on your Linux-system. Then you can just use one of them, for example LXDE. But you can also start applications from within LXDE that are written for KDE or Gnome, for example.

To program window-applications ("Graphical User Interfaces", GUI), so-called "widgets" are used. "Widget" is short for "window gadget" and means an element that is found in an application-window. Widgets can be buttons, entry-fields, sliders or checkboxes, for example.
Developers write code for a certain software-layer called "toolkit", which takes care of the layout of the widgets.
Windows comes with its own toolkit, which gives applications written for it the typical Windows-look.

Of course, on Linux there are different toolkits. That's also why the windows of applications look a bit different from those in Windows.
KDE is built on the toolkit "Qt". Gnome is built on a toolkit called "GTK".
There is also a rather small toolkit called "Tk", which is suitable for smaller applications. Tk-applications are not that beautiful, but the widgets work. Tk's advantages are its light weight and that it is relatively easy to learn. That's why it is often used in combination with script-languages like Perl (Perl/Tk) or Python (Tkinter). Originally, Tk was the toolkit for the script-language Tcl (Tcl/Tk).


13. Applications

Here are some often-used applications from free-software projects.
Most of them also offer versions for Windows, so it's possible to use free software on proprietary operating systems too:

LibreOffice          Office-suite (a fork of OpenOffice).
Firefox              Web-browser.
Thunderbird, kmail   Email-clients.
FileZilla            FTP-client.
Gimp                 GNU Image Manipulation Program.
Audacity             Sound-editing program.
MPlayer              Powerful video- and music-player and -converter.
Dolphin              File-manager in the style of the Windows Explorer (probably Linux only).
Midnight Commander   Text-based file-manager for the terminal in the style of the Norton Commander (Linux only).

Be sure to download these programs from trustworthy websites, where you get them for free. Look up the programs in Wikipedia and look for the original websites of the projects. Or look for download-areas on the websites of well-known computer-magazines like the German "Chip".


14. Games

As explained above, the principles of free software mean that many people (virtually) come together and write code. But games not only require code, but also an idea for the game and maybe a story-line. And then especially artwork: graphics and sound. It seems the idea of many people working together doesn't work as well when it comes to art. In fact, good art is often created by a single person, the artist. Think of Tolkien's "Lord of the Rings", the paintings of Rembrandt or the music of Beethoven.

Maybe that's the reason why there are fewer games for Linux than for Windows. The concept of a single commercial company that can hire and pay artists over a period of time seems to work better when it comes to creating games.

In fact there are a number of games for Linux, but not that many of the state-of-the-art 3D-blockbuster-shooters you may know from Windows. Instead there are smaller card- or board-games like:

Then there are some 2D-games like:

And there are some old commercial games that have become freeware or even open source over the years. "Beneath a Steel Sky", a point-and-click-adventure from 1994, is such a game. Or the 3D-online-shooter "Wolfenstein: Enemy Territory", which is still quite popular.
"ScummVM" makes it possible to play the great old LucasArts point-and-click-adventures like "Monkey Island" 1 and 2, "Indiana Jones and the Fate of Atlantis" or "The Dig", if you own these games.
And then, there are a number of quite good emulators of other, older computer-systems. There are emulators of the Sinclair ZX Spectrum ("Fuse"), the Atari 800 XL ("atari++"), the Commodore Amiga ("fs-uae"), the Atari ST ("steem") and the Commodore C64 ("Vice"), and even of old arcade machines ("Mame"). There were lots of games for these old systems. Some of them may be free to play today.

So, you can have fun playing games on Linux for many years, although you may not be able to play the newest fancy games for PlayStation, Xbox or Windows. But do you really need to?


15. Running Windows-Programs on Linux

In most cases, you don't need to run Windows-software on Linux. Most tasks can be accomplished by running native Linux-software.
If there's an application you really need, and it's only for Windows, I suggest running it on Windows.

However, there is a Linux-program called "wine", which makes it possible to some extent to run Windows-software on Linux. This is always experimental though. It may or may not work properly. My experiences with wine are not so good, to be honest. Sometimes I was surprised what it can do, but in most cases, it wasn't able to just replace Windows.
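If you want to try it anyway, the basic usage is simple (assuming wine is installed; "myprogram.exe" is just a placeholder for some Windows-program):

wine myprogram.exe       # try to run a Windows-executable through wine
winecfg                  # wine's graphical configuration-tool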

Another approach would be "virtualization", that is, emulating the hardware of another PC inside Linux in a "virtual machine". A program called "VirtualBox" can do that to a certain extent.

But as I said, it's best to use native Windows if you want to run Windows-programs. And if you use Linux and have a certain amount of knowledge about it, in most cases you don't need to.


16. Perl and Python

bash (the shell) is a command-line interpreter. Its language can also be used as a programming-language, so it is an interpreted language. An interpreter reads in code as text and executes it line by line, while in C/C++ the source-code has to be compiled, so that executable files are created.
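The difference can be seen on the shell. A small sketch (the file-names are just examples, and gcc, perl and python have to be installed):

gcc hello.c -o hello     # C: first compile the source-code into an executable ...
./hello                  # ... then run the executable

perl hello.pl            # Perl and Python: the interpreter reads the source-code
python hello.py          # and executes it directly, without a separate compilation-step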

In 1986, a programmer and linguist called Larry Wall worked at an institution somewhere in the USA. He had to automatically process large quantities of text and wasn't happy with the kind of code he had to write in C to do that. To make his task easier, he created an interpreted programming-language called "Perl" by mixing elements of C and bash (and of some cryptic Unix-commands used for data-processing called "awk", "grep" and "sed"). He released his language to the public, so the Perl-interpreter can be found on most Linux-systems today. There is also a central database on the internet with Perl-modules for almost any programming-task, called CPAN. In general, Perl-code runs slower than similar C-code, but it's way easier for the programmer to write, and with all the modules available it can do astonishing things in only a few lines of code.

In 1991, another programmer called Guido van Rossum created a similar interpreted programming-language called "Python". It's even easier to write code in that language, and the code looks much cleaner, so it's easier to read and maintain it some months or years later. Also, the programming-style called "object-oriented programming" is better implemented in Python.
The Python-interpreter is also free software and can be found on most Linux-platforms.
Version 2.7 is an often-used version. Unfortunately, version 3 introduced some changes that weren't well received.

So, in case you want to start learning programming one day, Perl or Python would be a good choice. Traditionally, many Linux-users started programming by learning bash first, then Perl, then C.

Windows-distributions of Perl (ActivePerl, Strawberry Perl) and Python (ActivePython) are also available.


17. Mischievous Software

With Free Software, Richard Stallman pursues the goal that the computer-user can keep control over his own computing.

But in today's world, he's the exception. Everybody else tries to gain control over the computer-systems or at least over the personal data of the computer-users.

Therefore it's not much of a surprise that there are efforts to gain influence on Linux-systems.

It may be just speculation that agencies like the NSA have tried to install backdoors into the Linux-kernel. If there is malware, developers who can read and alter the source code may delete this kind of code from the sources again.

A new initialization-system called "systemd", created mainly by a programmer called Lennart Poettering, reaches deep into the system without need. It stores log-files in a binary format instead of a text-only format, so users can't read them as easily any more.
Though systemd may have some advantages, it caused a lot of unease in the Linux-community.

Recently, it has become more difficult for the user to take control of the mount-process of external devices. Earlier, the user could define which devices could be mounted in a file called "/etc/fstab". Today, a service called "udev" creates files for a device dynamically when the device is plugged in. This may be convenient if it works automatically, but if something doesn't work as expected and the user has to intervene, he is confronted with a complicated system of "policies" and automatically generated files with cryptic file-names, which disappear again when the device is unplugged. This makes it difficult for the user to adapt his configuration-files to his hardware-devices.
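For comparison, an entry in "/etc/fstab" used to look something like this (the device-name and the mount-point are just examples):

# device      mount-point        filesystem   options        dump  pass
/dev/sdb1     /media/usbstick    vfat         noauto,user    0     0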

Then there is a suspicious program called "Nepomuk". Its purpose is to collect meta-data from different desktop-applications and link it together. Actually, I don't think this is necessary.

Another program of that kind is "Akonadi". It centrally manages the address-book and other personal data of the user.
When I recently installed a Linux-system, it was extremely slow and unusable in the default-configuration. Only after I deinstalled Nepomuk and Akonadi did the system reach an ordinary performance.
Other programs known to cause heavy CPU-load with barely any benefit for the user are the indexing-programs "Strigi" and "Baloo".

It's becoming a problem that unfriendly programs tend to entangle themselves with the kernel or the desktop environment. The rpm-system usually makes it possible to cleanly install or deinstall programs, so the user can decide whether he wants to use them or not. But the user can't get rid of these entangled programs without losing functionality of other important programs (like KDE-programs, for example).

There are evil people in the world, and unfortunately they can also be found in the Linux-world. But Free Software has an advantage: If mischievous programs really become a problem, there will be people who build Linux-distributions that work without them. So from time to time there may be cleaning-processes, if necessary.


18. Conclusion

I hope I could give an impression of what to expect when you know Windows and try to use Linux for the first time.
Of course, there are still lots of details, covered in the many Linux-books out there.
The aim of this overview was to get you started and keep you from losing orientation in all the details.

My experience with Linux is: At first it may not do what you want, but if you keep reading about the subject on the internet, after a while you may be able to make it do it.

And remember SuSE's motto: "Have a lot of fun..."


Creative Commons License
This work is copyright 12/12/2017 and belongs to Hauke Lubenow.
It is licensed and may be redistributed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).

Email: hlubenow2 {at-symbol} gmx.net