MNS Music Engine – Music Non Stop


Real-Time Generative Music System

What is the MNS Music Engine (MNS-ME)?

The MNS Music Engine (MNS-ME) is a sophisticated real-time generative music system implemented in JavaScript, designed to produce dynamic, evolving musical compositions through algorithmic processes. The system combines advanced sequencing algorithms with integrated synthesis architecture to create structured yet unpredictable musical patterns suitable for extended listening periods.

The engine serves as the foundation for our music applications and interactive projects, providing deterministic yet organic musical generation capabilities with comprehensive real-time control.

Core Architecture

The MNS Music Engine operates through five primary subsystems working in concert to generate musical content dynamically during playback with minimal latency.

Sequencer Engine generates musical patterns using multiple algorithmic approaches including walk-based melody generation, contour-based psychological modeling, and harmonic rhythm placement.

Synthesis Engine produces audio through parallel subtractive and FM synthesis paths with polyphonic voice management, supporting both classic analog-style synthesis and modern frequency modulation techniques.

[Architecture diagram: Sequencer Engine → Synthesis Engine → Effects Processing → audio output, with the Event System and Parameter Control System carrying control and events to MIDI and visual interfaces. Solid lines: audio signal flow; dashed lines: control & events.]

Effects Processing applies spatial and temporal processing including chorus, delay, convolution reverb, and multi-band EQ through a carefully optimized signal chain.

Event System provides real-time communication for visualization and external integration, broadcasting detailed information about each musical event without impacting audio performance.

Parameter Control manages hierarchical preset configurations with comprehensive MIDI integration, enabling both programmatic control and external hardware interface.

Advanced Sequencing Algorithms

The sequencer employs seven distinct generation algorithms, each optimized for specific musical contexts and capable of dynamic selection during runtime.

Walk Generation creates melodic phrases through pitch neighbor transitions with configurable step size and drift parameters, producing natural melodic contours through controlled randomization.
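
A minimal sketch of the idea, with illustrative parameter names (stepSize, drift) rather than the engine's actual API:

    // Illustrative walk-based melody generator: each note moves to a nearby
    // scale degree, with a slowly drifting directional bias.
    function makeWalkGen(scale, stepSize = 2, drift = 0.1, rng = Math.random) {
      let index = Math.floor(scale.length / 2); // start mid-scale
      let bias = 0;                             // accumulated directional drift
      return function next() {
        bias += (rng() - 0.5) * drift;
        const step = Math.round((rng() - 0.5 + bias) * 2 * stepSize);
        index = Math.min(scale.length - 1, Math.max(0, index + step));
        return scale[index];
      };
    }

    // Usage: walk over a C-major scale (MIDI note numbers)
    const nextNote = makeWalkGen([60, 62, 64, 65, 67, 69, 71, 72]);
    for (let i = 0; i < 8; i++) console.log(nextNote());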

Contour Generation implements psychology-based melody generation using directed elastic Gaussian selection, drawing on music cognition research to create inherently musical melodic shapes.
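
The precise algorithm is internal to the engine; one speculative reading, sketched here with invented names, is a pitch pulled elastically toward a moving contour target with Gaussian spread around it:

    // Speculative illustration only: elastic pull toward a contour target,
    // plus Gaussian spread (approximated here by summing twelve uniforms).
    function approxNormal(rng = Math.random) {
      let sum = 0;
      for (let i = 0; i < 12; i++) sum += rng();
      return sum - 6; // roughly standard normal (central-limit trick)
    }

    function contourStep(current, target, elasticity = 0.3, spread = 1.5) {
      const pull = (target - current) * elasticity; // elastic pull toward the contour
      return Math.round(current + pull + approxNormal() * spread);
    }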

Arpeggio Generation produces arpeggiated patterns with variable direction options (up, down, up-down, down-up, and random), with configurable octave range and pitch randomization.
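
The direction options map naturally onto array orderings, roughly as follows (hypothetical helper, not the engine's code):

    // Direction handling for arpeggiated patterns (illustrative only).
    function arpOrder(chord, direction, rng = Math.random) {
      const up = [...chord].sort((a, b) => a - b);
      const down = [...up].reverse();
      switch (direction) {
        case 'up':     return up;
        case 'down':   return down;
        case 'updown': return up.concat(down.slice(1, -1)); // no repeated endpoints
        case 'downup': return down.concat(up.slice(1, -1));
        case 'random': return up.map(() => chord[Math.floor(rng() * chord.length)]);
        default:       return up;
      }
    }

    // C-major triad, up-down: [60, 64, 67, 64]
    console.log(arpOrder([60, 64, 67], 'updown'));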

walkGen: melodic phrases through pitch neighbor transitions
contourGen: psychology-based melody using directed elastic Gaussian selection
arpGen: arpeggiated patterns with variable direction and octave range
harmonicRhythmGen: chord changes using musical phrasing and beat emphasis
chordGen: chord voicings across rhythmic patterns with configurable spacing
euclideanGen: Bjorklund algorithm for evenly distributed rhythmic patterns
stepGen: predefined pitch sequences with probability-based triggering

Harmonic Rhythm Generation places chord changes using musical phrasing principles and beat emphasis rather than mechanical distribution, creating more natural harmonic movement.

Chord Generation outputs chord voicings across rhythmic patterns with configurable voice count and spacing, supporting complex harmonic progressions.

Euclidean Generation implements the Bjorklund algorithm for evenly distributed rhythmic patterns, providing mathematically optimal rhythm distribution.
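
Euclidean rhythms are easy to sketch; the accumulator formulation below produces the same evenly distributed patterns as Bjorklund's algorithm (up to rotation):

    // E(pulses, steps): distribute pulses as evenly as possible across steps.
    function euclid(pulses, steps) {
      const pattern = [];
      let bucket = 0;
      for (let i = 0; i < steps; i++) {
        bucket += pulses;
        if (bucket >= steps) { bucket -= steps; pattern.push(1); }
        else pattern.push(0);
      }
      return pattern;
    }

    // Classic tresillo: E(3, 8) -> 00100101 (a rotation of x..x..x.)
    console.log(euclid(3, 8).join(''));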

Step Generation plays through predefined pitch sequences with probability-based triggering, allowing for both strict sequencing and probabilistic variation.

Probabilistic Sequencing Model

Each sequence operates through a multi-layered probability system where note triggering depends on the interaction between step-specific probabilities and global density parameters. This creates sequences that maintain recognizable motifs while introducing controlled variation at multiple structural levels.

The evolution system allows sequences to gradually mutate over time while preserving musical coherence through restoration mechanisms that periodically return to original patterns. This balance ensures both continuity and development in the generated musical content.
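
A sketch of that evolve-and-restore balance, with illustrative probabilities:

    // Illustrative evolve/restore pass: steps occasionally mutate by a small
    // interval, and a restore probability pulls them back toward the original.
    function evolve(seq, original, mutateProb = 0.2, restoreProb = 0.05, rng = Math.random) {
      return seq.map((note, i) => {
        if (rng() < restoreProb) return original[i];                  // drift back home
        if (rng() < mutateProb) return note + (rng() < 0.5 ? -1 : 1); // small mutation
        return note;
      });
    }

    let pattern = [60, 62, 64, 67];
    const original = [...pattern];
    pattern = evolve(pattern, original); // gradually mutates, periodically restores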

[Diagram: eight steps with probabilities 1.0, 0.3, 0.8, 0.2, 0.9, 0.4, 0.7, 0.2 and a global density of 0.7; a note is triggered when random < stepProbability × density. Legend distinguishes played, skipped, and evolved notes.]
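
The triggering rule from the diagram comes down to a one-line comparison (field names here are illustrative):

    // Per-step trigger decision combining step probability and global density.
    function shouldTrigger(stepProbability, globalDensity, rng = Math.random) {
      return rng() < stepProbability * globalDensity;
    }

    // The eight steps from the diagram, at density 0.7
    const probs = [1.0, 0.3, 0.8, 0.2, 0.9, 0.4, 0.7, 0.2];
    probs.forEach((p, step) =>
      console.log(step, shouldTrigger(p, 0.7) ? 'played' : 'skipped'));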

Synthesis Architecture

The synthesis engine supports both subtractive and FM synthesis approaches within a unified voice architecture. Each voice contains dual oscillators with multiple waveforms including pulse wave generation, a dedicated noise generator with independent filtering and envelope, multi-stage envelope systems for amplitude, filter, and vibrato modulation, and biquad filters with resonance control plus separate high-pass filtering.

Voice allocation uses a pre-allocated pool with round-robin distribution, eliminating voice-stealing artifacts and ensuring consistent performance. Each part can be configured with its own polyphony level, with automatic pan-spread distribution across the stereo field.
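
A round-robin pool is simple to sketch (the engine's internals may differ):

    // Pre-allocated voice pool with round-robin allocation: no searching,
    // no voice-stealing heuristics, constant-time behavior.
    class VoicePool {
      constructor(size, createVoice) {
        this.voices = Array.from({ length: size }, createVoice);
        this.next = 0;
      }
      allocate() {
        const voice = this.voices[this.next];
        this.next = (this.next + 1) % this.voices.length;
        return voice;
      }
    }

    // Four voices pan-spread across the stereo field (-1 .. +1)
    const pool = new VoicePool(4, (_, i) => ({ pan: -1 + (2 * i) / 3 }));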

The FM synthesis implementation uses oscillator 2 as a modulator for oscillator 1 frequency, with dedicated depth and ratio controls optimized for musical expressiveness.
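
In Web Audio terms, that modulator-into-carrier routing looks like this (values are illustrative):

    // Two-oscillator FM: oscillator 2 modulates oscillator 1's frequency,
    // with ratio setting the modulator pitch and depth the modulation amount.
    const ctx = new AudioContext();
    const carrier = ctx.createOscillator();   // oscillator 1
    const modulator = ctx.createOscillator(); // oscillator 2
    const depth = ctx.createGain();

    const baseFreq = 220;
    const ratio = 2;
    carrier.frequency.value = baseFreq;
    modulator.frequency.value = baseFreq * ratio;
    depth.gain.value = 100;                   // modulation depth in Hz

    modulator.connect(depth);
    depth.connect(carrier.frequency);         // offsets the carrier's frequency
    carrier.connect(ctx.destination);
    modulator.start();
    carrier.start();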

Effects Processing Chain

Audio processing follows a carefully optimized signal path: soft clipping provides harmonic saturation with automatic gain compensation, three-band EQ offers low shelf, parametric mid, and high shelf filtering, and dynamic limiting prevents output clipping while maintaining musical dynamics. On the spatial side, stereo chorus creates enhanced stereo imaging through independent left/right LFOs, tempo-synchronized delay provides rhythmic echoes locked to the BPM, and convolution reverb delivers high-quality spatial processing with damping controls.

[Signal-flow diagram: synth output feeds reverb and delay sends, with routing options to chorus, distortion, compressor, and master; the master chain runs clipper → 3-band EQ → limiter → output.]
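
The master stage of that chain can be approximated in the Web Audio API; curve shapes and settings below are illustrative, not the engine's actual tuning:

    // Master chain sketch: soft clipper -> 3-band EQ -> limiter -> output.
    function buildMasterChain(ctx) {
      const clipper = ctx.createWaveShaper();
      const curve = new Float32Array(1024);
      for (let i = 0; i < curve.length; i++) {
        const x = (i / (curve.length - 1)) * 2 - 1;
        curve[i] = Math.tanh(2 * x);          // soft clipping via tanh
      }
      clipper.curve = curve;

      const low = ctx.createBiquadFilter();  low.type = 'lowshelf';   low.frequency.value = 200;
      const mid = ctx.createBiquadFilter();  mid.type = 'peaking';    mid.frequency.value = 1000;
      const high = ctx.createBiquadFilter(); high.type = 'highshelf'; high.frequency.value = 6000;

      const limiter = ctx.createDynamicsCompressor(); // high-ratio compression as limiting
      limiter.threshold.value = -1;
      limiter.ratio.value = 20;

      clipper.connect(low).connect(mid).connect(high).connect(limiter).connect(ctx.destination);
      return clipper; // connect the synth bus here
    }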

Deterministic Randomness System

The system employs the xor128 algorithm to generate multiple independent random streams: a global stream for system-wide musical decisions, individual sequence streams per musical part to prevent cross-contamination, a dedicated audio stream for synthesis buffer generation, and a separate graphics stream for visualization elements.
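
xor128 is Marsaglia's xorshift128 generator; it is small enough to run one independent, seedable stream per part:

    // xor128: four words of state, a few shifts and XORs per number.
    function xor128(seed) {
      let x = seed >>> 0, y = 362436069, z = 521288629, w = 88675123;
      return function next() {
        const t = x ^ (x << 11);
        x = y; y = z; z = w;
        w = (w ^ (w >>> 19)) ^ (t ^ (t >>> 8));
        return (w >>> 0) / 4294967296; // uniform in [0, 1)
      };
    }

    // Independent streams: per-part decisions never perturb each other
    const globalRng = xor128(12345);
    const partRng = xor128(67890);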

Beyond uniform distribution, the system provides Gaussian random generation for more natural-sounding musical variation, creating organic fluctuations in timing, pitch selection, and parameter modulation that avoid mechanical artifacts.
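
One standard way to derive Gaussian values from a uniform stream is the Box-Muller transform (the engine's exact method is not specified here):

    // Box-Muller: two uniforms -> one standard-normal variate.
    function gaussianFrom(rng) {
      return function () {
        const u = 1 - rng(); // keep log() away from zero
        const v = rng();
        return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
      };
    }

    // Humanize timing: offsets with a standard deviation of 5 ms
    const gauss = gaussianFrom(xor128(42)); // using the xor128 stream above
    const timingOffsetMs = gauss() * 5;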

Seed-based initialization ensures identical musical output given the same initial conditions, enabling systematic exploration of the musical possibility space while maintaining creative unpredictability.

Real-Time Event System

The engine implements a comprehensive event system that broadcasts detailed information about each musical event in real-time, including note data, timing information, velocity, duration, and voice allocation details. This system enables sophisticated visualization, analysis, and external control integration for our interactive applications.
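
A minimal fan-out sketch; the actual event names and payload fields here are assumptions:

    // Tiny event bus: musical events fan out to listeners (visuals, logging)
    // without doing work on the audio path.
    class EventBus {
      constructor() { this.listeners = {}; }
      on(type, fn) { (this.listeners[type] ??= []).push(fn); }
      emit(type, data) { (this.listeners[type] ?? []).forEach((fn) => fn(data)); }
    }

    const events = new EventBus();
    events.on('noteOn', ({ pitch, velocity, voice }) =>
      console.log(`note ${pitch} vel ${velocity} voice ${voice}`));
    events.emit('noteOn', { pitch: 64, velocity: 0.8, voice: 2 });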

The event system supports custom readiness checking for synchronized startup with visual interfaces, ensuring precise timing coordination between audio and visual elements in our projects.

MIDI Integration

The MIDI system provides comprehensive input and output capabilities through the WebMIDI API, supporting real-time parameter control via continuous controllers, note-based part muting and control, program change for preset switching, sustain pedal control per channel, and TouchOSC integration for advanced control surfaces.

MIDI controllers are mapped to engine parameters through configurable scaling functions including linear, exponential, and power curves, enabling intuitive real-time control over synthesis, sequencing, and global parameters.
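
The three curve types reduce to simple shaping functions over the normalized 0-127 range (names and ranges below are illustrative):

    // CC-to-parameter scaling with linear, exponential, and power curves.
    const curves = {
      linear: (x) => x,
      exponential: (x) => (Math.exp(x) - 1) / (Math.E - 1),
      power: (x) => Math.pow(x, 2),
    };

    function mapCC(value7bit, min, max, curve = 'linear') {
      const normalized = value7bit / 127; // 0..127 -> 0..1
      return min + (max - min) * curves[curve](normalized);
    }

    // Filter cutoff on a power curve: finer resolution at low frequencies
    console.log(mapCC(64, 20, 20000, 'power')); // ≈ 5094 Hz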

Parameter Architecture

Parameters are organized into a three-tier hierarchy: global parameters control engine-wide settings including tempo, key, effects levels, and structural elements; audio parameters define synthesis characteristics, effects routing, and voice allocation for individual parts; and sequencer parameters control pattern generation, evolution, and musical behavior for each part.
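
As a rough illustration (field names are assumptions, not the engine's actual preset schema), a preset might look like:

    // Three-tier hierarchy: global, per-part audio, per-part sequencer.
    const preset = {
      global: { bpm: 96, key: 'D minor', reverbLevel: 0.4 },
      parts: [
        {
          audio:     { synthesis: 'fm', fmRatio: 2, polyphony: 4, delaySend: 0.2 },
          sequencer: { generator: 'walkGen', density: 0.7, evolveRate: 0.05 },
        },
      ],
    };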

All parameters can be modified during playback without audio artifacts, enabling smooth transitions and live performance applications. The preset system facilitates rapid exploration of musical territories while maintaining the ability to return to previous configurations.

Applications and Integration

The MNS Music Engine serves as the foundation for our various music applications and interactive projects. Its architecture supports both autonomous operation and interactive musical performance, making it suitable for ambient installations, interactive media projects, educational applications, and live performance systems.

The comprehensive event system and real-time parameter control enable integration with visualization systems, interactive interfaces, and external control hardware, providing the flexibility needed for diverse creative applications while maintaining professional audio quality and musical coherence.