There's strictly no warranty for the correctness of this text. You use any of the information provided here at your own risk. I cannot be held responsible if you damage your system by using any of the information provided here.
The music programs this website is about try to bring the functionality of a whole music studio onto a Linux computer. That is quite an ambitious goal.
Let's take a look at what a traditional music studio is about.
Such a studio is a place of one or more rooms filled with all kinds of musical equipment, for example instruments, microphones, effect devices, amplifiers, a mixing console and recording devices. People like sound engineers work there.
Traditional pop bands like the Beatles used a bass guitar (Paul), a rhythm guitar (John), a lead guitar (George), a drum kit (Ringo) and vocals (John, Paul, and the others too).
AC/DC still plays in a similar Rock setup today.
A drum kit consists of a bass drum, a snare drum, a hi-hat (which can be played closed and opened), and maybe some toms (tom-tom drums) and cymbals.
In the studio, the electric instruments and also the microphones for the drums and vocals are connected to other devices like an effects rack, where effects such as chorus, delay, reverb, distortion and compression can be added. All devices are connected to a huge mixing console, where the individual volumes can be balanced. The output of the mixing console is then sent to an audio recorder.
Of course, you are not limited to the classical band setup. You could also use a flute (think of "Jethro Tull"), a trumpet, a harmonica, a violin, a saxophone, or maybe some percussion for your production.
Maybe there was also a keyboard player playing an organ (think of "The Doors"), a grand piano (Elton John) or an electric piano like the Fender Rhodes or the Wurlitzer (Supertramp).
In the early 1970s, the Minimoog, a monophonic analogue synthesizer, entered the studios. It had three oscillators, which produced for example a sawtooth wave. This wave was then sent into a low-pass filter, and then into an "amplifier", which actually didn't raise but decreased the volume of the sound. Envelopes for filter and amplifier could be set, so the sound could be either long and stretched (like the sound of a string instrument) or short and plucked (like the sound of a guitar). When the filter of the synthesizer was slowly closed or opened, the sound developed characteristically over time.
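The subtractive signal chain described above (oscillator, then low-pass filter, then amplifier, each with an envelope) can be sketched in a few lines of Python. This is only a toy illustration, not a Minimoog emulation; all function names and parameter values are made up for this example.

```python
import math

SAMPLE_RATE = 8000  # a low rate keeps this toy example fast

def sawtooth(freq, seconds, sample_rate=SAMPLE_RATE):
    """Naive (aliasing) sawtooth oscillator, output in the range -1..1."""
    n = int(seconds * sample_rate)
    return [2.0 * ((i * freq / sample_rate) % 1.0) - 1.0 for i in range(n)]

def one_pole_lowpass(samples, cutoff_env, sample_rate=SAMPLE_RATE):
    """Very simple one-pole low-pass filter.

    cutoff_env supplies a cutoff frequency (Hz) per sample, so a falling
    envelope slowly 'closes' the filter over time, as described in the text.
    """
    out, y = [], 0.0
    for x, cutoff in zip(samples, cutoff_env):
        a = 1.0 - math.exp(-2.0 * math.pi * cutoff / sample_rate)
        y += a * (x - y)
        out.append(y)
    return out

def decay_env(start, end, n):
    """Linear decay envelope from start to end over n samples."""
    return [start + (end - start) * i / (n - 1) for i in range(n)]

# Half a second of a 110 Hz saw: the filter sweeps from open to almost
# closed, and the amplitude envelope fades out, giving a short, "plucked"
# sound rather than a long, "stretched" one.
saw = sawtooth(110.0, 0.5)
n = len(saw)
filtered = one_pole_lowpass(saw, decay_env(4000.0, 50.0, n))
voice = [s * e for s, e in zip(filtered, decay_env(1.0, 0.0, n))]
```

Writing `voice` to a ".wav" file would require a little more code (e.g. the `wave` module), but the three stages above are the whole point: the raw sawtooth is harmonically rich, the sweeping filter shapes its tone over time, and the amplifier envelope shapes its loudness.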
The Minimoog, being an electrical instrument, could be enhanced by effects and recorded just like an electric guitar.
There were also (rather expensive) analogue sequencers that could be connected to the synthesizer to make it play simple programmed sequences.
A few years later, there were also big polyphonic analogue synthesizers like the Prophet 5, the OB-X and the Jupiter 8.
In 1983, "MIDI" ("Musical Instrument Digital Interface") was introduced. Computers could then send data to several MIDI-capable synthesizers and drum machines at once, polyphonically on 16 different channels.
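On the wire, MIDI is a very simple byte protocol. A "note on" message, for example, is just three bytes: a status byte combining the message type (0x90) with the channel (0-15), followed by the note number and the velocity (each 0-127). A tiny illustrative helper:

```python
def note_on(channel, note, velocity):
    """Build a raw MIDI note-on message.

    channel: 0-15, note and velocity: 0-127.
    The status byte is 0x90 with the channel in the low four bits.
    """
    assert 0 <= channel <= 15
    assert 0 <= note <= 127
    assert 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) at velocity 100 on channel 0:
msg = note_on(0, 60, 100)
```

This is why 16 channels and velocity values up to 127 keep appearing throughout this text: they come straight from the byte layout of the protocol.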
Also in 1983, the Yamaha DX7 became available, a midi-capable digital synthesizer that used a different technique called "frequency modulation" (FM) to create its sound.
Also the first samplers like the "Fairlight CMI" or the "Emulator II" appeared, which made it possible to play digitally recorded sound samples across the keyboard. Although that's not much of a problem for today's computers or even smartphones, back then samplers were incredibly expensive.
When in 1980 Depeche Mode exchanged their guitars for synthesizers and drum machines, they still kept the rock band setup described above. There still was a bass (usually a Moog played by Andrew Fletcher), more synthesizers for rhythm and lead sounds, drums from a drum machine (Korg KR-55 for example) and vocals (by Dave Gahan, sometimes supported by the others).
Even when other sounds were used to replace instruments of a drum kit, the new sound still had the function of the original drum sound (the function of a bass drum, snare drum and so on).
So, what is needed in a computer to replace a whole music studio? Well, there must be:
On Windows, there are integrated programs called "Digital Audio Workstations" (DAWs) that provide all the requirements mentioned. The most popular DAWs are probably Cubase, Ableton Live, Reason, Pro Tools and FL Studio.
There are also a few commercial DAWs running on Linux, for example Bitwig Studio, Reaper and Renoise.
And there are a few open-source projects aiming at a DAW, like
(and trackers like "Milky Tracker"). They may be worth a look, but can't compete with the commercial solutions yet.
Qtractor has become rather good recently, though.
But there are also several stand-alone programs that can be connected together by a program called "jack" to accomplish the wanted results.
It's worth getting to know "jack", because that's the Linux way of doing things, and even integrated solutions on Linux may rely on it in the background.
The biggest obstacle to creating good music is not producing though, but songwriting. Over the past centuries, there have been so few really good composers that they're known by name (Bach, Mozart, Beethoven, Chopin, Debussy, Lennon/McCartney, and so on). So composing / songwriting is the biggest problem. If you can do that and therefore have a good song, you're already half-way there. Be it using Linux, Windows, Mac or no computer at all.
Ideally, a composer would hear the music in his head before he even started playing it on an instrument. But it seems many musicians are inspired by the sounds certain (especially electronic) instruments make.
Some composers always carry a small audio recorder to quickly hum and record a melody when it comes to mind.
Music production on Linux starts with a program called "jack".
And it's not even obvious what it does. DAW programs on Windows, for example, also include its functionality, but hide it in their code. But as we know, on Linux the system is open and visible to the user. And that's why you get in contact with jack. This is what it's about:
If you want to create music, you'll have to connect a lot of devices.
Think of a music studio in real life, where you connect an electric guitar to an amplifier. Or, if you want to record the result, you connect the guitar to a huge mixing console, which itself is connected to amplifiers as well as a recorder.
Inside a Linux box, these kinds of connections (audio and midi) are created inside a program called "jack".
There's a daemon process called "jackd", which can be controlled by a graphical program called "qjackctl".
So after starting qjackctl, you can start jack and make the connections you need.
jack is also supposed to take care of the problem of audio latency on general purpose computer operating systems.
jack uses two different systems for midi: jack-midi and alsa-midi.
Most programs and devices use alsa-midi, but some (like the "Calf Studio Gear" suite, for example) use jack-midi. To connect alsa-midi to jack-midi, there's a "bridge" program from the package "a2jmidid...rpm". When you start it, an additional "a2j" client shows up in the jack-midi screen of "qjackctl", holding the alsa-midi outputs as sub-clients.
You can also use the daemon "a2jmidid" to create such a bridge.
I'm not trying to explain here how to use qjackctl in detail. It's quite challenging, but there are other sites dealing with that topic.
fluidsynth is a software sampler that can handle sample libraries called "soundfonts", stored in the file format ".sf2".
"qsynth" is a graphical frontend for fluidsynth.
When you start qsynth, it shows up in jack (qjackctl). There you can connect its audio to the output of the soundcard, for example.
In qsynth, you should load a ".sf2" file with samples that fluidsynth will use. "Vintage_Dreams_Waves_v2.sf2", for example, is a nice library of synth sounds that can be used.
fluidsynth/qsynth also provides built-in reverb and chorus effects.
Notice that you can also use a sampler for drums. That is, a sampler (like qsynth) connected to a sequencer (like seq24) is basically a drum machine.
You may find soundfonts containing samples of classic drum-machines like the "808" (Roland TR-808) or the Linn-Drum on the internet.
Unfortunately, qsynth/fluidsynth can't handle ".sfz" files. Use qsampler/Linuxsampler to process these kinds of files.
There's another software sampler for Linux called "Linuxsampler". Its frontend is called "qsampler". It can also handle ".sf2" soundfont files and so-called ".gig" files (once used by a program called "Gigasampler/GigaStudio"). On the website of Linuxsampler, there's a big ".gig" file (uncompressed 940 MB) of a sampled Yamaha grand piano. So you can easily get a very nice piano sound into your music this way.
To create ".gig" files from ".wav" files, there is an editor called "gigedit", which unfortunately isn't very stable yet. There are also some tools like "gigdump" or "gigextract" to deal with existing ".gig" files. They can be found in the package "libgig-tools...rpm" (which is not obvious).
Linuxsampler can also handle ".sfz" files. A ".sfz" file is just a text file containing information about where to find one or more ".wav" files and how to spread their sound across the keyboard. That actually works very well.
If you have a ".wav" file called "mywav.wav" in a directory, you can create a file "mysfz.sfz" in the same directory containing just the following line:
<region> sample=mywav.wav lokey=0 hikey=127 pitch_keycenter=48 loop_mode=one_shot
You can then load the ".sfz" file into qsampler (using "Edit/Add Channel" (Ctrl+a)), and you're ready to play your ".wav" file in qsampler/Linuxsampler.
If you want the sound on just a single key (as is often the case with drum sounds), you can also write:
<region> sample=mywav.wav key=48 loop_mode=one_shot
You can also define groups of samples, so that the settings you define for the group affect every sample in it. For example, if you have several samples from an analogue synth with a long filter decay time, but want the samples to stop playing quickly when you release a key, you can set a shorter release time for them. This can be done like this:
<group> ampeg_release=0.2
<region> sample=36.wav lokey=36 hikey=37 pitch_keycenter=36
<region> sample=38.wav lokey=38 hikey=39 pitch_keycenter=38
<region> sample=40.wav lokey=40 hikey=40 pitch_keycenter=40
Notice that the option "loop_mode=one_shot" isn't used here, because it would make the samples play to their end every time, preventing the short release time (of the amplifier envelope) from taking effect.
The ".sfz" format lets you write many more things. But the lines above are the basic ones to get you started.
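Since a ".sfz" file is just text, you can also generate one with a few lines of Python, which is handy for drum kits with many samples. This sketch maps a list of ".wav" files to single keys, one <region> per file, exactly like the one-key example above; the file names and key numbers are made up for illustration.

```python
def drum_sfz(wavs_and_keys):
    """Build the text of a simple drum-kit .sfz file.

    wavs_and_keys: list of (wav_filename, midi_key) pairs.
    Each pair becomes one <region> line mapped to a single key,
    with loop_mode=one_shot as is usual for drum sounds.
    """
    lines = []
    for wav, key in wavs_and_keys:
        lines.append(f"<region> sample={wav} key={key} loop_mode=one_shot")
    return "\n".join(lines) + "\n"

# Hypothetical sample files on the General-Midi-style drum keys:
kit = [("bassdrum.wav", 36), ("snare.wav", 38), ("hihat_closed.wav", 42)]
sfz_text = drum_sfz(kit)

# Write the result next to the .wav files, e.g.:
# with open("mykit.sfz", "w") as f:
#     f.write(sfz_text)
```

Load the resulting file into qsampler as described above, and you have a simple sample-based drum kit.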
If you connect a class-compliant midi keyboard by USB, it also shows up in jack/qjackctl, in the tab "alsa (alsa-midi)".
You can connect it to qsynth there.
If you don't have a hardware midi keyboard, you can install the program "vmpk" (Virtual Midi Piano Keyboard). It shows virtual keys on the screen, and it too can be connected to qsynth in qjackctl.
So, if vmpk is connected to qsynth (in qjackctl), an sf2 library is loaded into qsynth, and qsynth's audio is connected to the "system" output of the soundcard (in qjackctl), you should hear sound when you click on a virtual key of vmpk. Before clicking, make sure your volume settings are not too loud.
With "jack_rec", you can record audio that is going through jack to a ".wav" file.
In the connections in "qjackctl", look for the audio output you'd like to capture: if qsynth is running, its audio outputs should be shown as "qsynth/l_00" and "qsynth/r_00".
The corresponding "jack_rec"-command would then be:
jack_rec -f output.wav "qsynth:l_00" "qsynth:r_00" -b16 -d10000
This is pretty cool (and rather powerful).
seq24 is a small midi sequencer, mainly for creating short patterns and loops.
Although seq24's patterns can later be grouped into a song (using the "song editor" window), other midi sequencers, especially "Rosegarden", are more appropriate for creating a whole song.
To use seq24 as an (alsa-) midi client in jack, edit the file "/home/user/.seq24rc". Look for the section
and set the value there to "1" (instead of "0"). Then jack recognizes seq24 as a stand-alone midi client (which is important). Dexed then also recognizes seq24 as a midi input.
At startup, seq24 shows different slots for patterns. Right-click on a slot to create a new pattern.
In the pattern-edit window, you can see the output seq24's midi is sent to. If you choose "Midi Through Port-0", the output is sent to the "Midi through" output inside the alsa-midi connections in qjackctl. That way you can send seq24's output to qsynth, for example.
In the pattern-edit window, you can right-click on the piano-roll part of the window. The cursor then turns into a pencil, and you can set notes by left-clicking.
If you set the mouse mode to "fruity" in the options, the cursor automatically turns into a pencil in the edit window (much better, if you ask me).
Right below the grid for the notes, there's one more line, slightly separated. It is used to insert midi events other than notes. For example, you can set a program-change event at the beginning of the pattern there.
Below this line there's an area with vertical lines, one for each event. Values of 0 to 127 can be set there by right-clicking on the vertical lines. (If a vertical line refers to a note event, its value indicates the velocity of the note.)
Use the vertical line of your program-change event to select the instrument of qsynth you want to use (otherwise qsynth uses instrument 0 of the selected library, which may not always be what you want).
See the file "/usr/share/doc/packages/seq24/SEQ24" for more instructions on how to use the sequencer.
Or here's a nice tutorial on Youtube.
Actually, when I recently saw a three-layer step sequencer built into the Fairlight-inspired commercial Windows plugin "UVI Darklight", I noticed that this kind of multi-layer step sequencer could easily be realized with seq24. Because that's exactly what seq24 is about: multiple short patterns that are looped over and over again at the same time.
"Rosegarden" is a multi-track midi sequencer for creating whole songs, similar to those found in DAW programs.
I described the functions of this program on another page, which can be found here.
Dexed is probably the best free emulation of the Yamaha DX7 FM synthesizer out there. It can also read ".syx" files with DX7 program patches, which gives you access to thousands of DX7 sounds created since the release of the synthesizer in 1983.
On my SuSE 13.1 (32bit) system, I was able to compile Dexed version 0.9.4. In ".../dexed-master/Builds/Linux/", there is a Makefile, so you can simply type "make". If the compiler complains about a line "-std=c++1z", edit the "Makefile" and change "-std=c++1z" to "-std=c++11" (this sets the C++ dialect used).
To enable jack-support, you have to go to the directory ".../dexed-master/JuceLibraryCode/" and edit the file "AppConfig.h" there: Change "JUCE_JACK 0" to "JUCE_JACK 1" like this:
#ifndef JUCE_JACK
#define JUCE_JACK 1
#endif
When you're ready, go back to the Makefile and type "make". In the subdirectory "build", a working executable "Dexed" with jack-support should have appeared. Go to "options" in the top-left corner of Dexed's window and select "Audio/Midi-settings" to set useful options for jack and seq24.
Somehow Dexed creates a jack audio client (called "JUCEJack"), but it doesn't create a midi client in jack. Midi input has to be configured inside the options of Dexed, where jack's alsa-midi outputs to receive data from are listed.
I have to admit, it's pretty cool to have Dexed working on a Linux box.
Another (slightly older and a bit less accurate) DX7-emulation which is known to work on Linux, would be Sean Bolton's "Hexter".
ZynAddSubFX (or "zyn" for short) is a virtual analogue synthesizer program. It's not the newest program of this kind, so it doesn't make use of all the processor power available today. Therefore, there are commercial programs today that emulate analogue synthesizers even more accurately (like U-He "Diva", U-He "RePro", TAL-U-No-LX or "Sonicprojects OP-X Pro II", which I have). But ZynAddSubFX still doesn't sound bad. And it's free.
Just connect its audio and midi in qjackctl.
Check out the sound "Pads / 'Synth Pad 3'" for example. I like it a lot.
Notice that ZynAddSubFX also comes with its own effects section. Reverb, echo, chorus, phaser, it's all there (use the section "Insertion Effects" and insert the selected effect into "Master out").
WhySynth is another virtual instrument. It is a DSSI plugin and was written by Sean Bolton. Try the presets, WhySynth sounds great! It also features one internal effect ("plate reverb" for example).
If you want to use WhySynth as a stand-alone-program, start it with
(or wherever else "whysynth.so" is located after the installation).
Sean Bolton is also the author of the DX7 emulation called "Hexter", which can be started in a similar way. Though today most people may prefer "Dexed" because it is so accurate, "Hexter" is still a nice instrument and worth a try too.
U-He is a commercial company that produces high-quality VST plugins. Some of these plugins have even been used in Hollywood film productions.
Though most people may use the plugins on Windows and Mac, U-He also offers Linux-versions of most of their products.
U-He "Tyrell" is a virtual analogue soft synth that has been made available as freeware.
There's also a Linux version of "Tyrell". It is a native Linux VSTi for 64-bit and also 32-bit computers (like mine).
"Tyrell" can be downloaded on this site (direct link here). There is also a manual for download.
Qtractor can be used as a native Linux VST host for "Tyrell". It seems there are not many other reliable native Linux VST hosts available at the moment.
Copy the subdirectory "TyrellN6" to "/usr/lib/vst" and add this directory to the ones Qtractor shall scan for VST plugins (using "View/Options/Plugins/Paths" in Qtractor).
Then open a new midi track in Qtractor and select "Tyrell" as a VST plugin for that midi track.
In qjackctl, connect your keyboard's midi output to Qtractor, and Qtractor's audio output to the system's output. Then you should be able to hear Tyrell.
Notice that though ZynAddSubFX, WhySynth and Tyrell basically accomplish the same task (they produce synthesizer sounds), each program sounds quite different and has its own character.
"jack-rack" is a program that kind of emulates the multi-effects rack found in a music studio. When you start "jack-rack", an audio client appears in the "connections" section of "qjackctl".
So you can connect an audio source like qsynth or Dexed to jack-rack, and then connect the output of jack-rack to the soundcard's audio output.
In the window of jack-rack, you can select from a huge number of effects (written as so-called LADSPA plugins).
Unfortunately, some of these are not written well, so they may produce terribly loud noise or crash altogether.
Here are some effects for jack-rack though, that seem to do what they're supposed to and do it well:
Here are the websites of the authors: TAP, Zita/FIL, SC4, C* (caps).
The effects are stored in "/usr/lib/ladspa" as ".so" files. jack-rack searches this directory for them. You may want to clean up a bit in there, maybe moving unused effects to a directory "/usr/lib/ladspa_unused". This is what the content of my directory "/usr/lib/ladspa" looks like:
caps.so filters.so sc4_1882.so sc4m_1916.so tap_autopan.so tap_chorusflanger.so tap_echo.so tap_tubewarmth.so tap_vibrato.so zita-reverbs.so
Unfortunately, development of jack-rack stopped in 2007, and it is still not perfectly stable. I wish someone would develop it further, and especially make it bug-free.
There are newer plugin-hosts called "Carla" or "Calf Studio Gear", but I like the simplicity of jack-rack.
ng-jackspa is similar to jack-rack, but hosts just a single LADSPA-plugin. Maybe it's more stable than jack-rack (we'll see).
It likes the LADSPA_PATH variable to be set. This can be done with:
export LADSPA_PATH=/usr/lib/ladspa
To use ng-jackspa when jack is running, you have to name the plugin you want to use (found in the LADSPA directory) and provide the so-called "unique ID" of the plugin. For example:
gjackspa tap_echo tap_stereo_echo
When the LADSPA_PATH is set, the unique ID of the plugins can be found with the command
which is part of the ladspa-rpm-package.
"Calf Studio Gear" is a suite of effect plugins and virtual instruments. Most effects are in the format "LV2" (which is the successor to "LADSPA").
The plugins can be used in other LV2-hosts like Qtractor.
But "Calf Studio Gear" also comes with its own plugin host called "calfjackhost". With it, you can set up an effects rack similar to "jack-rack" (but more modern).
On my system, the fonts of "calfjackhost" were much too small. Fortunately, the font sizes can be configured. Look for lines including the string "font_name" in the file:
You may need to search with the shell command "rpm -ql calf | grep calf.rc" to find where the file "calf.rc" has been installed on your system.
When the first plugin is added to the rack of "calfjackhost", a jack audio client and a jack midi client are created. By default, both are called "Calf Studio Gear". For each plugin inside calfjackhost, there's also a special window to make connections.
But when a plugin is added to the rack of calfjackhost, it is also listed as a sub-client in the jack-clients list. So you can also use jack to make all connections.
"Calf Studio Gear" seems to be stable and sounds great. Well done!
hydrogen is a quite powerful drum machine. It seems to work only stand-alone, but it can be connected to other programs using jack (and also supports jack-transport).
If I remember correctly, it was better to compile hydrogen from its sources. And compilation was a bit tricky, so I was really happy when it finally worked.
hydrogen lets you edit drum patterns. Several patterns can be linked together into a song.
hydrogen comes with several drum kits, but you can also create your own by loading in custom ".wav" files (like samples of the TR-808 or the LinnDrum found on the internet, for example). Drum kits made of several ".wav" files can easily be saved as a single file and reloaded later.
Volumes and pitches can be set for each drum instrument individually. There's even a filter for each instrument.
The velocity of each note can also be set in the drum editor. That way you can create drum accents.
hydrogen also supports LADSPA effects. Up to now, I have just piped the output of hydrogen into the input of jack-rack to add effects. That worked just fine and sounded surprisingly good.
Actually, I'm quite happy with hydrogen at the moment.
Qtractor is a real DAW program ("Digital Audio Workstation") that is close to reaching version 1.0 by now. It seems it has become quite a marvellous piece of software in recent years.
Basically it uses Qt5 for its GUI, but versions 0.9.2 and below still support Qt4, if you compile using "./configure --prefix='/usr' --enable-qt4".
Qtractor also relies on jack, so most of the things mentioned above are still of importance.
Qtractor can handle multiple tracks of midi and audio, and there's a piano-roll midi editor for whole songs.
Qtractor supports effect plugins (LADSPA and LV2) and virtual instruments (DSSI and LV2). It even supports native Linux VST plugins; Dexed, for example, is available in this format.
As Qtractor also appears in the jack connections, you can integrate stand-alone Linux audio programs (like "hydrogen") as well. Alsa midi can be sent out from Qtractor tracks through jack, and the audio output of the external program can be connected to the jack audio input of Qtractor. So you can record the audio output of the external program into an audio track in Qtractor. That's quite awesome.
There's a mixer, and the different tracks can be mixed down into a ".wav"-file.
"mididings" is a midi router, filter and processor. It is a Python-based program that can be used to manipulate incoming midi data in various ways.
In qjackctl, it creates a jack alsa midi client, so you can connect a midi keyboard to its input, and send manipulated midi data from its output to an instrument.
For example, mididings can be used to change the velocity curve of the data coming from a midi keyboard.
If a midi keyboard is velocity sensitive (which most of today's midi keyboards are), it sends lower midi velocity values when a key is pressed softly, and higher values when a key is pressed harder (up to a value of 127). How hard you have to press a key to reach a high value (or even 127, the highest) depends on a so-called "velocity curve" the manufacturer has built into the keyboard. If you're unlucky, you'll find that none of the built-in velocity curves of your midi keyboard is suitable to control a certain piano sound or DX7 sound correctly. The sound may then be either too quiet or too loud in relation to how hard you press the keys. This problem can be solved using mididings.
When mididings is installed, to raise a velocity curve, you have to write a small script that looks like this:
#!/usr/bin/python
# coding: utf-8
# velocity.py

from mididings import *

config(
    client_name='velocity-changer',
)

mypatch = (
    Velocity(curve=1.5) >> Sanitize()
)

run(mypatch)
Notice that in Python, indentation matters. You have to keep it as shown, or the script won't work.
When you start "velocity.py", it creates an alsa-midi client in jack called "velocity-changer", to which you can connect your midi keyboard and instrument. The velocity values of the midi keyboard are then raised according to the script.
"Velocity(curve=1)" passes the midi values unchanged, "Velocity(curve=2)" raises them to maximum, so "Velocity(curve=1.5)" is a reasonable value to raise an incoming velocity curve, that is too low.
Sanitize() makes sure the sent midi values don't exceed 127.
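To get a feeling for what such a curve does, here is a small illustrative sketch. The exact formula mididings uses for Velocity(curve=...) is not shown here, so this gamma-style mapping is only an assumption of the general shape: it keeps 0 and 127 fixed and bends everything in between upwards (curve greater than 1) or downwards (curve less than 1).

```python
def bend_velocity(v, curve):
    """Map a velocity 0..127 through a gamma-style curve.

    Illustrative only; not necessarily mididings' exact formula.
    curve=1 leaves values unchanged, curve>1 raises the middle of
    the range, curve<1 lowers it. Endpoints 0 and 127 stay fixed.
    """
    assert 0 <= v <= 127
    return round(127.0 * (v / 127.0) ** (1.0 / curve))

soft_raised = bend_velocity(40, 1.5)   # noticeably higher than 40
unchanged = bend_velocity(40, 1.0)     # identical to the input
```

So a softly played note (velocity 40) comes out louder with curve=1.5, while the quietest and loudest possible strokes are unaffected, which is exactly the kind of correction you want when a keyboard's built-in curve is too shallow.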
Other useful "modifiers" of mididings would be:
The documentation of mididings can be found here; more detailed explanations of the modifiers are here.
I was able to compile the latest version of wine, which was "wine 4.0.2", and also "dssi-vst". The latter installs an executable "vsthost", which expects the file name of a Windows VST dll as an argument. It is then often somehow able to host the Windows VST as a jack client on Linux.
At least this worked (to my amazement) with the VST "Sonicprojects OP-X Pro II", which I once bought (I can recommend it, it's rather good).
When you play two or more instruments, like, let's say, ZynAddSubFX and hydrogen, you probably want to hear and record both of them at once.
Actually, jack already does mixing, so you can hear both instruments when you connect them to the system's output.
Unfortunately, you can't record from the system's output with "jack_rec". There are also "system capture" clients in jack, but they don't provide the sound of the system's output; they represent the microphone input of the soundcard.
So you need some kind of program that creates clients on jack's input side as well as on its output side. That's just what mixers for jack do.
#!/usr/bin/perl
use warnings;
use strict;

use Audio::JackMiniMix;

my $ADDRESS = "osc.udp://localhost:10940/";
my $a = Audio::JackMiniMix->new($ADDRESS);

print "\n";
print $a->ping();
print "\n";
print $a->get_channel_label(1);
print "\n";
print $a->get_url();
print "\n";
print $a->channel_count();
print "\n";
print $a->set_channel_gain(2, 2);
print "\n";
print $a->get_channel_gain(2);
print "\n\n";
The channel count should be 4 (for four sub-clients on the right side of qjackctl, called "in1_left", "in1_right" and so on).
The "->set_channel_gain(2, 2);" should have the effect that sound coming into "in2_left" and "in2_right" can finally be heard.
So basically, this mixer works. I like it. I think I'll use it.
It is possible to create the needed jack connections using bash commands. If you are familiar with bash scripts, this may give you more control than the "patch bay" inside qjackctl, which also lets you manage and save jack connections.
shows the names of the audio- and jack-midi-connections (the alsa-midi-connections are not shown this way, use "aconnect -lio" for that). Use for example
jack_connect ZynAddSubFX:out_1 system:playback_1
to create an audio connection from ZynAddSubFX to the system audio output (which usually is the soundcard output). As already mentioned, "aconnect -lio" is used to show the alsa-midi connections. "aconnect" is part of the package "alsa-utils". Use it similarly to "jack_connect" to create these kinds of connections.
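If you have a whole list of connections to make, a tiny script can build the "jack_connect" calls for you. Building the command lines is plain string handling; actually running them of course requires a running jack daemon, so this sketch only constructs them and leaves the execution commented out. The port names are just examples.

```python
import subprocess  # only needed if you actually run the commands

# Hypothetical list of (source, destination) jack port pairs:
CONNECTIONS = [
    ("ZynAddSubFX:out_1", "system:playback_1"),
    ("ZynAddSubFX:out_2", "system:playback_2"),
]

def jack_connect_cmds(connections):
    """Turn (source, destination) port pairs into jack_connect commands."""
    return [["jack_connect", src, dst] for src, dst in connections]

cmds = jack_connect_cmds(CONNECTIONS)

# To actually make the connections (jackd must be running):
# for cmd in cmds:
#     subprocess.run(cmd, check=True)
```

The same pattern works for "aconnect" on the alsa-midi side; only the command name and port naming differ.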
All jack-alsa-midi-connections can be removed with:
All jack-audio-connections can be removed with:
"jmess" can also make a snapshot of all current jack connections (without jack-alsa, it seems) and save it to an XML file, from which the setup can be restored later.
The command to start the jack daemon correctly, including all options, should be found in the file
Use the tool "amidi".
So now you have
which can all be connected with "qjackctl". And you're able to record the audio to a ".wav" file (with "jack_rec").
Maybe it's time to make some music.