Compiling Software from Source on Linux


There's strictly no warranty for the correctness of this text. You use any of the information provided here at your own risk.


Contents:

  1. Introduction
  2. The Software Management System on OpenSuSE: rpm
  3. Compiling Minimal C/C++-Programs with gcc and g++
  4. Linking against Libraries
  5. The Most Common Build-Process: "./configure; make; make install"
  6. Missing Packages with Header-Files; "-devel"-Packages
  7. Commands for Unpacking Source Code Packages
  8. Finding the Right Source Package for Your Distribution
  9. Watching the Output of ./configure; Setting Options
  10. Reading the Compilation Documentation
  11. autogen.sh, aclocal, autoconf, automake, ldconfig, libtoolize, depmod
  12. Alternative Compilation Programs
  13. Individual Cases


1. Introduction

Like probably most people, I always had problems compiling software from source on Linux. That's because, if something doesn't work as expected, you need quite some understanding of the system and of the build-process of C/C++-programs to fix it.
But recently I could fix more problems and therefore was able to compile more programs (which makes me a little proud). So I thought I should write down what I found out about the process.
I'm using OpenSuSE 13.1, which of course is not the newest distribution.


2. The Software Management System on OpenSuSE: rpm

First we have to take a look at the way already compiled programs are managed on the system.

On OpenSuSE (the Linux-distribution that I use) compiled software is managed in so-called "rpm-packages". "rpm" is short for "Red Hat Package Manager". These packages are also shown in the software section of the system tool "YaST2".

When rpm-packages are installed, they check in a system-wide rpm-database whether all other packages on which they depend are already installed.
If not, the installation - with a command like "rpm -i" from the console - is canceled, and you get an error message telling you which packages are also required.
If you install with YaST2, it also checks the dependencies and installs the other required packages, if you want it to.

As mentioned you can use

rpm -i somepackage.rpm

(as root) to install it. And you can use

rpm -e somepackage

to deinstall it. As all information about the files is written to the rpm-database, you usually get a clean deinstallation. This is better than on Windows with "regedit.exe" and its "registration database" problems.

In some cases, you don't care about dependencies or conflicts with other packages; you just want to poke your package into the system, no matter what. Fortunately, you can do that (as root) with:

rpm -i somepackage.rpm --nodeps --force

Notice that you have to call rpm commands for already installed packages without the ".rpm"-suffix. To find out whether a certain package is installed on the system and what its exact name is, you can do:

rpm -qa | grep -i somepackage

You can then get general information about the package with

rpm -qi somepackage

And you can get a list of the installed files with:

rpm -ql somepackage

If you want to use "-qi" and "-ql" on an uninstalled rpm-package lying as a file in some directory on your harddisk, you have to add a "p":

rpm -qpi somepackage.rpm
rpm -qpl somepackage.rpm
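
One more query that can be helpful: if you want to know to which installed package a certain file on the system belongs, you can ask rpm with "-qf" (the path here is just an example, taken from my system):

rpm -qf /lib/libc.so.6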

Notice that you can browse the contents of rpm-files with the Midnight Commander. As this is sometimes necessary, you should learn how to do it.

So, most OpenSuSE-software comes in these rpm-packages, and YaST2 and the rpm-commands above are enough to manage them. If an rpm-package of the software is available in a trustworthy repository (especially in one of the official OpenSuSE-repositories), and if it works, you should always use it. You then don't need to care about compiling and can just happily use your software.

This text deals with software that isn't available in such packages (or with cases where an older rpm doesn't work as well as a newer version that has been released as source in the meantime).

The aim then is to get the sources (usually provided in a ".tar.gz"-file) and create a working ".rpm"-file from them. As mentioned, this can be a complex and difficult process. You'll still need the rpm-commands mentioned above to check the results or to quickly install additional software the program needs.


3. Compiling Minimal C/C++-Programs with gcc and g++

When you have a minimal "Hello World"-program "hello.c" in C,

#include <stdio.h>

int main() {
    puts("Hello World.");
    return 0;
}

you can compile it with

gcc -o hello hello.c

and get an executable called "hello".

The same works for a minimal "Hello World"-program "hello.cpp" in C++,

#include <iostream>

using namespace std;

int main() {
    cout << "Hello World." << endl;
    return 0;
}

which can be compiled with:

g++ -o hello hello.cpp
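
In both cases, the resulting executable is just a file in the current working directory, so it can be started like this:

./hello

which should print "Hello World." on the console.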


4. Linking against Libraries

Almost every C/C++-program makes use of libraries that have to be introduced to the compiler. A small program "printsin.c" like this:

#include <stdio.h>
#include <math.h>

int main(void)
{
    printf("%f\n", sin(45));
    return 0;
}

can only be compiled if the gcc-command knows which library to use and where to find it. The gcc-command also has to know where the header-files for the "#include"-directives inside the program can be found.
"/usr/lib" is a standard directory to search for libraries, "/usr/include" a standard directory for header-files. Other directories for libraries would have to be passed to gcc with the "-L"-option.
Here, gcc just has to be told to use the library "libm.so" in "/usr/lib", using the "-l"-option. That would be:

gcc -o printsin printsin.c -lm

Notice that there isn't a space character between "-l" and the library name. Also notice that "lib" and the ".so"-suffix are left out, so it's just "-lm", not "-llibm.so".
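
If a library doesn't reside in one of the standard directories, gcc has to be told where to search for it with the "-L"-option mentioned above (the "-I"-option does the same for header-files). A sketch, assuming a hypothetical library "libfoo.so" with its header-files installed under "/opt/foo":

gcc -o myprogram myprogram.c -I/opt/foo/include -L/opt/foo/lib -lfoo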

The header-files mentioned above, which can usually be found in "/usr/include", have the suffix ".h". They contain declarations of the functions of the library. By using the "#include"-directive, these function-declarations become part of the C-program. That way, the functions of the library can be used in the C-program.


5. The Most Common Build-Process: "./configure; make; make install"

gcc- or g++-commands can become rather long and complicated when compiling real-life programs.
That's why another procedure has been created, which is the most common one when compiling software on Linux.

The sources of Linux programs usually come in a compressed archive-file, often in the format ".tar.gz". This "tar-ball" can be unpacked with

tar xzvf somesourcecode.tar.gz

A subdirectory containing the program's sources should appear. Change to that directory using "cd". In that directory, there is usually a bash-script called "configure".

When executed, this bash-script checks the configuration of the system.
It is usually executed as "./configure" to make sure that the "configure" inside the current working directory is run (not a "configure" somewhere else on the system). In the language of the shell, the dot "." represents the current working directory, and the "/" is a directory separator.
"configure" writes the results of its checks to a file called "Makefile".
Notice the capital letter at the beginning. As Linux filenames are case-sensitive, a name starting with a capital letter stands out; most filenames are written in lower case.
The content of a minimal Makefile for the "Hello World"-program above would look like this:

hello : hello.c
	cc -o hello hello.c

When the Makefile is created, the user can run the command "make".
"make" reads in "Makefile" and creates the necessary gcc/g++-commands from it, including all complicated options. It then also runs these commands.

If libraries needed for the build process cannot be found or used, or if there are bugs in the source-code of the program, either "configure" or "make" will fail. They will then cancel the configuration- or build-process and print an error message instead. It's then up to the user to fix the problem. Sometimes this is possible, sometimes it isn't, depending on how severe the problem is and how skilled the user is.

If "make" is successful, all executables and other files of the application have been created. But they're still in the subdirectory of the build-process.

The user can then run "make install" to copy the created files into the system. Copying files into the system requires "root"-privileges, so before running "make install", the user has to become "root" using the "su"-command.
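
So the end of a typical build would look something like this ("exit" leaves the root shell again):

su
make install
exit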

Unfortunately, running "make install" bypasses the rpm-database. So the application and its programs aren't recognized by rpm or YaST, and it may be difficult to remove them from the system again later.
So instead of running "make install", I used to run a special program called "checkinstall". It tries to take all executables and other files of the application and automatically build an "rpm"-package from them. Unfortunately, in some cases users have reported that it can be dangerous to the system and even break it. That's why I decided not to use "checkinstall" any more.
But to be honest, "make install" bypassing the rpm-database hasn't been such a great problem on my home user systems in the past.
This command shows which commands "make install" would run (it goes through "make install" without actually installing anything):

make -n install

It's useful to pipe the output of this command into a file, of course.
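
For example:

make -n install > install-commands.txt

The file "install-commands.txt" (the name is just an example) then contains the list of commands and can be studied in a text editor.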

"configure" and "make" can also be executed in one line like this:

./configure; make

Then the first command is executed, after that the second. An alternative is:

./configure && make

Then the second command is only executed, if the first command has finished successfully (&& is a logical AND).

This is the general build process. In the following, I describe a few cases of what can go wrong and how to fix it.


6. Missing Packages with Header-Files; "-devel"-Packages

Let's say you want to compile an application that uses the SDL-library ("Simple DirectMedia Layer", often used by games). You get an error that the SDL-library is missing. You check with

rpm -qa | grep -i sdl

but you find a package "libSDL-...". So the library should be there.
In this situation, you should check whether the corresponding "-devel"-package is also installed. So look out for "libSDL-devel...".

As you can see with

rpm -ql libSDL-devel | less

the "-devel"-packages often contain the header files for the program (suffix ".h"), which usually can be found below "/usr/include".

In general, if the build process says a program or a library is missing, first check in YaST2 whether there is an rpm-package for that program or library in the available repositories. It's quicker and easier to install missing rpms from the repositories than to search for the project page on the internet and try to build the missing program or library yourself.
And, as mentioned, also check for the "-devel"-rpm-packages of the missing program or library and install them too.
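
If you prefer the console over YaST2, searching and installing rpm-packages from the repositories can also be done with "zypper", OpenSuSE's command-line package manager (installing requires root privileges). For example:

zypper search sdl
zypper install libSDL-devel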


7. Commands for Unpacking Source Code Packages
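
Which command to use for unpacking a source code package depends on its suffix. The most common cases are:

tar xzvf somesourcecode.tar.gz
tar xjvf somesourcecode.tar.bz2
tar xJvf somesourcecode.tar.xz
unzip somesourcecode.zip

(The last one requires the "unzip"-program to be installed.) Newer versions of GNU tar can usually detect the compression format themselves, so a plain "tar xvf" often works for all of the "tar"-formats, too.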


8. Finding the Right Source Package for Your Distribution

If you use the newest version of your distribution or if you keep updating it (which uses lots of energy and produces huge amounts of CO2, considering that billions of computers all over the world do that 24/7 without necessity), you should try to compile the most recent source code of a program.

If you use an older distribution (like me, running OpenSuSE 13.1 in 2019), it may not be possible to compile the most recent source code, because it may depend on versions of certain libraries that are newer than the ones on your system. So you may have to look for older versions of a program's source code.

An essential library is "glibc" (on my system that's "/lib/libc.so.6"; you can check, as you know by now, with "rpm -ql glibc"). It provides the fundamental functions for using C on a Linux system. So it's kind of the "backbone" of a distribution. Without "glibc", nothing would work.
If the compilation of a program requires a different version of glibc than the one on your system, you probably won't be able to compile or use the program. And don't ever let a compile-process try to change the existing version of glibc on your system.
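
To find out which glibc-version is installed, you can for example ask the rpm-database:

rpm -q glibc

The library itself can also be executed directly; on my system, "/lib/libc.so.6" (it may be "/lib64/libc.so.6" on 64-bit installations) prints its version information when run as a command.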

Other libraries that may cause problems are the ones for the GUI-toolkits, especially GTK and Qt. These toolkits define what the windows on your desktop look like. It may be difficult to update them. On my OpenSuSE 13.1, for example, I have Qt4. If a program requires Qt5, I probably won't be able to compile it and will instead look for an older version of the source code, which only requires Qt4.


9. Watching the Output of ./configure; Setting Options

Sometimes, "./configure" runs successfully, but prints warnings during the process. That shouldn't happen. Read the warnings carefully and decide, whether you can ignore the warning or if you have to try to fix the problem, the warning is mentioning.
Maybe you have to install additional libraries (and their "-devel"-packages) first.

Often, "./configure" also provides special options for compiling the program. You can take a look at them with

./configure --help | less

Sometimes the install prefix is set to "/usr/local", so the program files will be installed to directories like

/usr/local/bin/someprogram
/usr/local/lib/someprogram
/usr/local/include/someprogram

If you want "/usr/lib", "/usr/include" instead, you have to set the prefix to the directory "/usr" by running "./configure" with

./configure --prefix='/usr'

That may not be the only "./configure"-option you'll want to change.

"qtractor" up to version 0.9.2 for example (a music software) provides an option to compile for Qt4, called "--enable-qt4". As Qt4 is my latest version of Qt at the moment, this is quite an essential option for me, if I want to compile "qtractor" for my system.


10. Reading the Compilation Documentation

The source files usually come with notes on the compilation process written by the developers of the program. When problems occur, you should read these files. They are usually called "README" or "INSTALL".


11. autogen.sh, aclocal, autoconf, automake, ldconfig, libtoolize, depmod

In some cases it may be necessary to run other system programs before going for "./configure". For example, there may be a script "autogen.sh" in the compilation directory, which has to be executed first.
Other programs in this category would be:

aclocal
autoconf
autoreconf
automake
autoheader

Unfortunately, I don't really know (yet) what they are doing.
For example, sometimes the "Makefile" isn't delivered in the tar-ball itself; it is created by "automake" from input-files.
The following system commands have something to do with the library system:

ldconfig
libtoolize

"ldconfig" seems to rebuild the library system. So if a program doesn't find the necessary libraries, this command may help.

Someone wrote that this sometimes helps to create the compilation files for libraries. It worked for "libdvdread" and such:

libtoolize --force
aclocal
autoheader
automake --force-missing --add-missing
autoconf
./configure --prefix=/usr

"depmod" seems to rebuild the kernel module system, which has to be done after compiling a new module (use "lsmod" to view the kernel module list, "modprobe" to insert a module, and "rmmod" to remove a module; all being root).

These commands are still a bit of a mystery to me, but I wanted to mention them.


12. Alternative Compilation Programs

"./configure; make; make install (or checkinstall)" is the most common installation process. Some programs require other programs for compilation though.


13. Individual Cases

Polyphone 1.8, which is an editor for ".sf2" and ".sfz" music sampler files (soundfonts):
When running "qmake", and then "make", it complained about "qcustomplot.h" not being found. The sources of "qcustomplot" just provided the ".h"-file and a ".cpp"-file. I then found, that when compiling a static library for Qt, you have to use a ".pro"-file. I also found, that the Polyphone sources already included the "qcustomplot" sources, and there also was a ".pro"-file. Activating the line

DEFINES += USE_LOCAL_QCUSTOMPLOT

in the file "polyphone.pro" made it finally possible to compile with "make".



Email: hlubenow2 {at-symbol} gmx.net