Opening the Settings would unconditionally try to start the JACK server
if it was not already running, even if the user was using an audio
backend other than JACK. When it failed to start the JACK server, it
would retry a few times, which made opening the Settings take several
seconds. For me, it also broke my whole system's audio, requiring me
to restart PipeWire.
This PR fixes that problem by passing the JackNoStartServer option to
jack_client_open() in the setupWidget used by the Settings.
AudioJack::initJackClient() continues to use the JackNullOption option,
so like before, it will still attempt to start the JACK server if you select
the JACK backend and restart LMMS.
I also moved some static free functions into a nearby anonymous
namespace, and I removed the server name argument from the
jack_client_open() calls since that variadic argument is only supposed
to be passed if the JackServerName option is passed.
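A minimal sketch of the idea. The option values below are copied from the real `<jack/types.h>` for illustration; `jackOpenOptions()` is a hypothetical helper, not code from this PR — when building against JACK you would include the actual header instead of redefining the enum.

```cpp
#include <cstdint>

// Option bits as defined in the real <jack/types.h> (values copied here
// so the sketch is self-contained; include the actual header in practice).
enum JackOptionBits : std::uint32_t
{
    JackNullOption    = 0x00, // default: the server may be auto-started
    JackNoStartServer = 0x01, // fail immediately if no server is running
    JackServerName    = 0x04, // a server-name vararg follows in jack_client_open()
};

// Hypothetical helper: the Settings probe must not spawn a server, while
// the real audio backend keeps the old auto-start behavior.
inline std::uint32_t jackOpenOptions(bool probingFromSettings)
{
    return probingFromSettings ? JackNoStartServer : JackNullOption;
}
```

With JackNoStartServer set, jack_client_open() fails immediately when no server is up, so the Settings dialog no longer blocks on retries; and because JackServerName is not set, no server-name variadic argument may be passed.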
Improves the tap tempo algorithm and usage, as well as refactoring the code for better maintainability.
---------
Co-authored-by: Dalton Messmer <messmer.dalton@gmail.com>
This reworks the auto-quit feature by introducing a new AudioBuffer class which keeps track of which channels are currently silent as audio flows through the effects chain.
When track channels going into an effect's input are not marked as quiet, it is assumed a signal is present and the plugin needs to wake up if it is asleep due to auto-quit. After a plugin processes a buffer, the silence status is updated.
When the auto-quit setting is disabled (that is, when effects are always kept running), effects are always assumed to have input noise (a non-quiet signal present at the plugin inputs), which should result in the same behavior as before.
Benefits:
- The auto-quit system now closely follows how it is supposed to function by only waking plugins which have non-zero input rather than waking all plugins at once whenever an instrument plays a note or a sample track plays. This granularity better fits multi-channel plugins and pin connector routing where not all plugin inputs are connected to the same track channels. This means a sleeping plugin whose inputs are connected to channels 3/4 would not need to wake up if a signal is only present on channels 1/2.
- Silencing channels that are already known to be silent is a no-op
- Calculating the absolute peak sample value for a channel already known to be silent is a no-op
- The silence flags also could be useful for other purposes, such as adding visual indicators to represent how audio signals flow in and out of each plugin
- With a little more work, auto-quit could be enabled/disabled for plugins on an individual basis
- With a little more work, auto-quit could be implemented for instrument plugins
- AudioBuffer can be used with SharedMemory
- AudioBuffer could be used in plugins for their buffers
This new system works so long as the silence flags for each channel remain valid at each point along the effect chain. Modifying the buffers without an accompanying update of the silence flags could violate assumptions. Unit tests can validate the correct functioning of AudioBuffer itself, but its usage in AudioBusHandle, Mixer, and a few other places where track channels are handled will need to be handled with care.
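The silence-flag idea can be sketched as follows. This is an illustrative stand-in, not the real AudioBuffer API — the class and method names here are invented for the example.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative sketch: each channel carries a "silent" flag that makes
// silencing and peak-finding no-ops when the channel is already quiet.
class SilenceTrackingBuffer
{
public:
    SilenceTrackingBuffer(std::size_t channels, std::size_t frames)
        : m_data(channels, std::vector<float>(frames, 0.f))
        , m_silent(channels, true) // a freshly allocated buffer is silent
    {
    }

    bool silent(std::size_t ch) const { return m_silent[ch]; }

    void write(std::size_t ch, std::size_t frame, float sample)
    {
        m_data[ch][frame] = sample;
        if (sample != 0.f) { m_silent[ch] = false; }
    }

    // Silencing a channel already known to be silent is a no-op.
    void silence(std::size_t ch)
    {
        if (m_silent[ch]) { return; }
        std::fill(m_data[ch].begin(), m_data[ch].end(), 0.f);
        m_silent[ch] = true;
    }

    // Skips the scan entirely for channels known to be silent.
    float absPeak(std::size_t ch) const
    {
        if (m_silent[ch]) { return 0.f; }
        float peak = 0.f;
        for (float s : m_data[ch]) { peak = std::max(peak, std::abs(s)); }
        return peak;
    }

private:
    std::vector<std::vector<float>> m_data;
    std::vector<bool> m_silent;
};
```

An effect whose inputs all map to silent channels can stay asleep; when auto-quit is disabled, the flags would simply be treated as "not silent" everywhere so every plugin keeps running, as before.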
---------
Co-authored-by: Sotonye Atemie <sakertooth@gmail.com>
Allows detaching a window from LMMS's main window, making things like working on multiple screens easier.
The behavior of detached windows can be customized in the Settings.
Closes #1259
---------
Signed-off-by: Dalton Messmer <messmer.dalton@gmail.com>
Co-authored-by: Hyunjin Song <tteu.ingog@gmail.com>
Co-authored-by: Dalton Messmer <messmer.dalton@gmail.com>
Co-authored-by: SpomJ <mihail_a_m@mail.ru>
Improves performance when moving mixer channels to the left or right by using simple Qt layout operations and swapping of values. This commit also fixes a deadlock in Mixer::mixToChannel by ensuring we lock and unlock the same channel regardless of any move operations occurring simultaneously on another thread.
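The deadlock fix can be sketched like this. The names are illustrative (this is not the real Mixer code): the point is to resolve the channel index exactly once, so the lock and unlock are guaranteed to target the same mutex even if another thread reorders the channel list mid-call.

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Hypothetical stand-in for a mixer channel.
struct ChannelSketch
{
    std::mutex lock;
    float buffer = 0.f;
};

inline void mixToChannel(std::vector<ChannelSketch*>& channels,
    std::size_t index, float sample)
{
    ChannelSketch* ch = channels[index]; // resolve the index exactly once
    std::lock_guard<std::mutex> guard{ch->lock}; // same object locked and unlocked
    ch->buffer += sample;
}
```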
When #7454 was merged, Timelines were refactored to emit a positionJumped signal whenever their internal tick position was changed via Timeline::setTicks. This was meant as an easy way to account for all situations where the user might drag or click or somehow forcefully set the position of a timeline mid-playback.
However, the current way timeline syncing is implemented between the song editor and piano roll (when doing record-play in the piano roll) is by constantly calling setTicks on the piano roll's timeline whenever the song editor's timeline moves. This inadvertently means that positionJumped is being emitted even when the timeline is tracking smoothly.
This PR attempts to address this issue by adding an optional parameter to Timeline::setTicks which allows the caller to opt out of emitting positionJumped. Ideally, we would have a better system for syncing timelines together, one which does not rely on forcefully calling setTicks. But for now, this should work.
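A minimal stand-in for the idea (the names are illustrative, and a plain callback stands in for the Qt signal; this is not the real Timeline class): setTicks() emits positionJumped by default, but a caller that is merely syncing one timeline to another can opt out.

```cpp
#include <functional>

// Sketch of a timeline whose setTicks() can skip the "jumped" notification.
class TimelineSketch
{
public:
    std::function<void()> positionJumped; // stands in for the Qt signal

    void setTicks(int ticks, bool emitJump = true)
    {
        m_ticks = ticks;
        if (emitJump && positionJumped) { positionJumped(); }
    }

    int ticks() const { return m_ticks; }

private:
    int m_ticks = 0;
};
```

The piano-roll sync path would call `setTicks(songTicks, false)`, so smooth tracking no longer looks like a user-initiated jump.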
---------
Co-authored-by: Sotonye Atemie <satemiej@gmail.com>
Requested by a user on the forum. Sometimes, it can get tricky when two clips are right next to each other and you're trying to resize one of them. Either you resize the wrong one, or one clip starts resizing behind the other, so you end up having to move one over, resize it, then move it back. And it's just not awesome.
However, this can be mitigated by simply increasing the clip resize grip width. Currently, it is at 4 px, but increasing it to 8 px makes it easier to specifically target which clip you want to resize. It also probably makes it easier to resize clips in general for users who may struggle to aim the mouse perfectly within that 4 pixel gap.
Partially addresses #8188
For some reason, when a knob is set to be a "volume knob", the max volume allowed via double-clicking and typing in a value is 6.0 dB. For knobs such as the amplifier knob in AudioFileProcessor, which go up to 500% (~14 dB), this is not enough.
This PR makes it so that the dialog shows the actual max dB value based on the underlying model's max value, instead of being hardcoded to 6.0 dB. This PR does not address the minimum value.
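For reference, the percentage-to-decibel conversion behind those numbers: 100% is 0 dB, and 500% works out to about +14 dB, well above the old hard-coded 6.0 dB ceiling (which corresponds to roughly 200%). A small self-contained helper (the function name is illustrative):

```cpp
#include <cmath>

// Amplitude-ratio to decibels: 100% -> 0 dB, 500% -> ~13.98 dB.
inline float volumeToDbfs(float volumePercent)
{
    return 20.f * std::log10(volumePercent / 100.f);
}
```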
* Clarify VST support to VST2 in README
Updated VST support mention from VST(i) to VST2 in features section for clarity.
* Update README.md
Co-authored-by: Dalton Messmer <messmer.dalton@gmail.com>
* mention LV2 support
---------
Co-authored-by: Dalton Messmer <messmer.dalton@gmail.com>
Refactors the PortAudio backend to fix issues with DirectSound and MME crackling and not loading properly, as well as to improve code quality and maintainability.
---------
Co-authored-by: Dalton Messmer <messmer.dalton@gmail.com>
Fixes #8200. Scrolling to change velocity/panning can affect unselected notes if hovering over them in the bottom section of the piano roll. This PR addresses that.
This refactors the export dialog to no longer use the export_project.ui file and moves it into standard C++ Qt code. This also brings minor changes to the dialog, such as horizontal labels instead of vertical ones.
* reimplement getter and setter of FloatModelEditorBase::m_volumeKnob to avoid undo checkpoint creation
* convert FloatModelEditorBase::m_volumeKnob from BoolModel to bool
* clean up
* remove unnecessary inline keywords and fix formatting
---------
Co-authored-by: Sotonye Atemie <sakertooth@gmail.com>
Fixes #8190
When #7454 was merged, I added a check before emitting playbackPositionJumped to only emit when the song was playing, to prevent LFOs from being reset when the user dragged the timeline while the song was paused. However, I used Song::isPlaying(), which only returns true when the song is not exporting. This caused sample clips to not be updated when the playback position jumped back every loop, which means they just kept playing, ignoring the loop. To fix this, I have changed it to use m_playing, which is true for both exporting and normal playing.
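The distinction can be modeled with a tiny sketch (member names follow the PR text; this is not the real Song class): isPlaying() is false while exporting, whereas m_playing is true for both normal playback and export, so the loop-jump check must use the latter.

```cpp
// Sketch of the playing/exporting state distinction described above.
struct SongStateSketch
{
    bool m_playing = false;
    bool m_exporting = false;

    // True only for normal (non-export) playback, per the PR text.
    bool isPlaying() const { return m_playing && !m_exporting; }

    // The fix: emit the jump whenever actually playing, export included.
    bool shouldEmitPlaybackPositionJumped() const { return m_playing; }
};
```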
Removes `SampleLoader`. File dialog functions were moved into `FileDialog`. Creation functions were moved into `SampleBuffer`.
---------
Co-authored-by: Dalton Messmer <messmer.dalton@gmail.com>
This PR modifies how the Detuning tool works in the piano roll to let the user modify the automation curve right on the notes. This is actually fairly simple, as automation clips already contain the functions for dragging and removing points like in the Automation Editor. So essentially all that's being done is calculating the relative position of the mouse to the nearest note, and setting the drag value in the automation clip to that position and key.
You can still access the old automation editor version by shift-clicking.
messmerd asked if the code could be made general for any future note parameter besides detuning, in preparation for CLAP per-note parameter automation. I have refactored the relevant functions to accept an enum type for the kind of note parameter, although currently only detuning is supported.
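A hypothetical sketch of the coordinate math described above (the function name and parameters are invented for illustration): convert the mouse position into a key offset relative to the note, which then becomes the drag value written into the note's automation clip.

```cpp
// Map a mouse y-coordinate to a key offset relative to a note's own row.
// keyHeight is the pixel height of one semitone row in the piano roll.
inline float mouseToRelativeKey(float mouseY, float noteY, float keyHeight)
{
    // Screen y grows downward while pitch grows upward, hence the negation.
    return -(mouseY - noteY) / keyHeight;
}
```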
---------
Co-authored-by: Sotonye Atemie <sakertooth@gmail.com>
Co-authored-by: Sotonye Atemie <satemiej@gmail.com>
Co-authored-by: szeli1 <143485814+szeli1@users.noreply.github.com>
Fixes #8182
When #7454 was merged, the frameOffset variable inside Song::PlayPos, which kept track of the actual frame position the song was playing at relative to the last tick, was moved to Timeline. In the process, the type was inadvertently changed from float to f_cnt_t.
This caused the frame offset to be truncated every time it was updated in Song::processNextBuffer, causing the song playback to slowly drift back ever so slightly, just a handful of frames every bar. As a result, any new sample clips spawned were created slightly late, becoming out of sync with any existing playing samples.
This PR fixes the issue by reverting the frameOffset variable type to be float again, allowing it to track sub-sample offsets over time.
Fixes #8138
Essentially, when beat notes are played in the pattern editor, they technically have an internal length of 0. When the NotePlayHandle is created and receives that value of 0, it realizes it's supposed to be a beat note, so it asks the instrument track what the default length of the note should be. For most instruments, that defaults to whatever the length of the envelope is (no matter whether the envelope is enabled or not. Is that a good system? I don't know.) However, AudioFileProcessor does it custom and returns the length of its sample in frames.
However, the frame count it returned did not take into account that the sample rate of LMMS could be different from the sample rate of the sample. This PR fixes that issue by multiplying by the correct sample rate ratio.
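The corrected computation amounts to scaling the sample's frame count by the ratio of the engine rate to the sample's own rate, so the beat note lasts the sample's real duration regardless of any mismatch (the function name here is illustrative):

```cpp
#include <cstdint>

// Scale a sample's frame count from its native rate to the engine rate.
inline std::int64_t beatNoteFrames(std::int64_t sampleFrames,
    std::int64_t engineSampleRate, std::int64_t sampleRate)
{
    return sampleFrames * engineSampleRate / sampleRate;
}
```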
Closes #7869
This PR aims to fix linking bugs and to make the linking code faster. In the future I would like to replace controller and automation code with linked models, so it is essential for this feature to work as efficiently as possible.
Before this PR:
- AutomatableModels store a list of AutomatableModels that they are linked to.
- setValue() and other functions make recursive calls to themselves, resulting in the #7869 crash.
- Each AutomatableModel can unlink from other AutomatableModels, unlinking is the inverse operation to linking.
After this PR:
- AutomatableModels store a pointer to another AutomatableModel, forming a linked list. The end is connected to the first element, resulting in a "ring".
- setValue() and others are now recursion-free; the code runs once for every linked model, more efficiently than before.
- Each AutomatableModel can NOT unlink from other AutomatableModels; unlinking is NOT the inverse operation to linking. AutomatableModels can unlink themselves from the linked list, but they cannot unlink themselves from single models.
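The ring structure described above can be sketched as follows. This is an illustrative toy, not the real AutomatableModel API: linking splices two rings by swapping next-pointers, unlinking removes only the model itself, and setValue() walks the ring exactly once with no recursion.

```cpp
#include <utility>

// Toy model of the linked-model ring.
struct RingModel
{
    RingModel() : m_next(this) {} // an unlinked model is a ring of one

    // Splice this model's ring with another ring (classic circular-list
    // merge by swapping the two next-pointers).
    void linkWith(RingModel* other)
    {
        std::swap(m_next, other->m_next);
    }

    // Remove only ourselves from the ring; unlinking from one specific
    // model is intentionally not supported.
    void unlink()
    {
        RingModel* prev = this;
        while (prev->m_next != this) { prev = prev->m_next; }
        prev->m_next = m_next;
        m_next = this;
    }

    // Recursion-free: visit every linked model exactly once.
    void setValue(float v)
    {
        RingModel* m = this;
        do { m->m_value = v; m = m->m_next; } while (m != this);
    }

    float value() const { return m_value; }

private:
    float m_value = 0.f;
    RingModel* m_next;
};
```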
---------
Co-authored-by: allejok96 <allejok96@gmail.com>
The TrackContentWidgets in the pattern editor rely on the positionChanged signal to be sent when the pattern index changes so that they know when to update. This signal was removed in #7454, but this PR puts it back.
Previously, this PR simply added a new signal to the Timeline class and had it emitted by the TimeLineWidget class in order to solve #7351.
However, as messmerd pointed out in his review comments, that was quite a hacky solution, and ideally the positionChanged signal would be solely managed by the backend Timeline class, while the frontend TimeLineWidget class would connect to those signals, instead of the other way around.
This PR is no longer a simple bugfix, but instead a refactoring of the Timeline/TimeLineWidget signal/slots.
Changes
- The positionChanged signal and updatePosition slot were removed from TimeLineWidget and moved to Timeline.
- Removed PlayPos, and instead store the timeline position in a TimePos along with a separate frame offset counter variable. The functions to set the ticks/timepos in Timeline emit the positionChanged signal. (Also, they may emit the positionJumped signal if the ticks were forcefully set; see below.)
- The pos() method and PlayPos m_pos were removed from TimeLineWidget.
- The constructor for TimeLineWidget no longer requires a PlayPos reference.
- Since each TimeLineWidget stores a reference to its Timeline, a new method was added, timeline(), for other classes to access it.
- Removed the array of PlayPos'es in Song. Now each Timeline holds its respective TimePos.
- Song's methods for getPlayPos were changed to return the TimePos of the respective Timeline. The non-const versions of the methods were removed because Timeline does not expose its TimePos for writing.
- All of the places where Timelines are used were updated, along with their calls to access/modify the PlayPos. For example, occurrences of m_timeline->pos() were replaced with m_timeline->timeline()->pos(), and calls to m_timeline->pos().setTicks(ticks) were changed to m_timeline->timeline()->setTicks(ticks).
- ALSO: Removed m_elapsedMilliseconds, m_elapsedBars, and m_elapsedTicks from Song. The elapsed milliseconds is now handled individually by each Timeline.
- NEW: The m_jumped variable has been removed from Timeline. Now jumped events emit Timeline::positionJumped automatically whenever the ticks are forcefully set. This means it is no longer necessary to call timeline->setJumped(true) or timeline->setFrameOffset(0) whenever the playhead position is forcefully set. Many places in the codebase were already missing these calls, so this may fix some unknown bugs.
---------
Co-authored-by: Dalton Messmer <messmer.dalton@gmail.com>
Co-authored-by: saker <sakertooth@gmail.com>
Co-authored-by: Alex <allejok96@gmail.com>
* Fix Shift+Space corrupting playback state when used on stopped editors
Shift+Space (togglePause) was allowing state corruption by modifying
m_playing and m_paused without checking m_playMode. This created an
impossible state where m_playing=true while m_playMode=None, causing
the regular Space key to stop working.
Added a guard to ensure togglePause() only operates when something is
actually playing (m_playMode != PlayMode::None). This prevents the
corrupted state and maintains proper play/pause/stop behavior.
Fixes #8036
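The guard can be modeled with a minimal sketch (member names follow the PR text; this is not the real Song class): togglePause() returns early unless something is actually playing, so the impossible m_playing=true / m_playMode=None state can no longer arise.

```cpp
// Toy model of the guarded togglePause() described above.
enum class PlayMode { None, Song, Pattern };

struct PlaybackStateSketch
{
    PlayMode m_playMode = PlayMode::None;
    bool m_playing = false;
    bool m_paused = false;

    void togglePause()
    {
        // Guard: do nothing unless something is actually playing.
        if (m_playMode == PlayMode::None) { return; }
        m_playing = !m_playing;
        m_paused = !m_paused;
    }
};
```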
---------
Co-authored-by: regulus79 <117475203+regulus79@users.noreply.github.com>
Fixes #8152
- Revert PianoRoll closing when empty (caused issues with detached windows)
- Remove TextFloat warning when there are no instruments/multiple clips (showed up when reloading a project)
- Keep clip creation for empty projects
- Add an icon and an updated message in the empty PianoRoll
- Mark PianoRollWindow invalid when there is no clip, making its buttons impossible to click
---------
Co-authored-by: Fawn <rubiefawn@gmail.com>
Previously, loading a new sample in an AFP instance with reverse enabled would desync the button from the model, such that the button would indicate the sample was reversed despite the new sample not actually being reversed.
This commit fixes this behavior so that samples loaded into an AFP instance with reverse enabled are actually reversed.