Real-Time Generative Music System

What is the MNS Music Engine (MNS-ME)?
The MNS Music Engine (MNS-ME) is a sophisticated real-time generative music system implemented in JavaScript, designed to produce dynamic, evolving musical compositions through algorithmic processes. The system combines advanced sequencing algorithms with integrated synthesis architecture to create structured yet unpredictable musical patterns suitable for extended listening periods.
The engine serves as the foundation for our music applications and interactive projects, providing deterministic yet organic musical generation capabilities with comprehensive real-time control.
Core Architecture
The MNS Music Engine operates through five primary subsystems working in concert to generate musical content dynamically during playback with minimal latency.
Sequencer Engine generates musical patterns using multiple algorithmic approaches including walk-based melody generation, contour-based psychological modeling, and harmonic rhythm placement.
Synthesis Engine produces audio through parallel subtractive and FM synthesis paths with polyphonic voice management, supporting both classic analog-style synthesis and modern frequency modulation techniques.
Effects Processing applies spatial and temporal effects including chorus, delay, convolution reverb, and multi-band EQ through a carefully optimized signal chain.
Event System provides real-time communication for visualization and external integration, broadcasting detailed information about each musical event without impacting audio performance.
Parameter Control manages hierarchical preset configurations with comprehensive MIDI integration, enabling both programmatic control and external hardware interface.
Advanced Sequencing Algorithms
The sequencer employs seven distinct generation algorithms, each optimized for specific musical contexts and selectable dynamically at runtime.
Walk Generation creates melodic phrases through pitch neighbor transitions with configurable step size and drift parameters, producing natural melodic contours through controlled randomization.
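A pitch walk of this kind can be sketched in a few lines of JavaScript. The function and parameter names here (`generateWalk`, `maxStep`, `drift`) are illustrative assumptions, not the engine's actual API; the injectable `rng` simply makes the behavior testable.

```javascript
// Sketch of walk-based melody generation over scale degrees (assumed names).
// Each step moves to a nearby degree; `maxStep` bounds the interval and
// `drift` biases the walk upward or downward over time.
function generateWalk({ length, scaleSize, start, maxStep, drift = 0, rng = Math.random }) {
  const pitches = [start];
  let current = start;
  for (let i = 1; i < length; i++) {
    // Pick a signed step in roughly [-maxStep, +maxStep], nudged by drift.
    let next = current + Math.round((rng() * 2 - 1) * maxStep + drift);
    // Reflect at the scale boundaries to keep the contour in range.
    if (next < 0) next = -next;
    if (next >= scaleSize) next = 2 * (scaleSize - 1) - next;
    current = next;
    pitches.push(current);
  }
  return pitches;
}
```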
Contour Generation implements psychology-based melody generation using directed elastic Gaussian selection, based on music cognition research principles for creating inherently musical melodic shapes.
Arpeggio Generation produces arpeggiated patterns with variable direction options including up, down, up-down, down-up, and random sequences, plus configurable octave range and pitch randomization.
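The direction options can be illustrated with a small ordering helper; `arpeggioOrder` and its parameters are assumed names for illustration, not the engine's API.

```javascript
// Given a chord (MIDI note numbers), return the playback order for a
// direction. Up-down and down-up omit the repeated endpoints, a common
// convention for ping-pong arpeggios.
function arpeggioOrder(chord, direction, rng = Math.random) {
  const up = [...chord].sort((a, b) => a - b);
  const down = [...up].reverse();
  switch (direction) {
    case 'up':      return up;
    case 'down':    return down;
    case 'up-down': return up.concat(down.slice(1, -1));
    case 'down-up': return down.concat(up.slice(1, -1));
    case 'random':  return up.map(() => up[Math.floor(rng() * up.length)]);
    default:        return up;
  }
}
```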
Harmonic Rhythm Generation places chord changes using musical phrasing principles and beat emphasis rather than mechanical distribution, creating more natural harmonic movement.
Chord Generation outputs chord voicings across rhythmic patterns with configurable voice count and spacing, supporting complex harmonic progressions.
Euclidean Generation implements the Bjorklund algorithm for evenly distributed rhythmic patterns, providing mathematically optimal rhythm distribution.
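The Bjorklund algorithm's even distribution can be reproduced (up to rotation) with a simple accumulator formulation; this sketch is illustrative rather than the engine's actual code.

```javascript
// Euclidean rhythm via a "bucket" accumulator: each step adds `pulses`
// to the bucket, and a hit fires whenever the bucket overflows `steps`.
// This spreads `pulses` onsets as evenly as possible across `steps` slots.
function euclideanPattern(pulses, steps) {
  const pattern = [];
  let bucket = 0;
  for (let i = 0; i < steps; i++) {
    bucket += pulses;
    if (bucket >= steps) {
      bucket -= steps;
      pattern.push(1); // onset
    } else {
      pattern.push(0); // rest
    }
  }
  return pattern;
}
```

For example, 3 pulses over 8 steps yields a rotation of the familiar tresillo pattern, with onsets spaced 3-3-2.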
Step Generation plays through predefined pitch sequences with probability-based triggering, allowing for both strict sequencing and probabilistic variation.
Probabilistic Sequencing Model
Each sequence operates through a multi-layered probability system where note triggering depends on the interaction between step-specific probabilities and global density parameters. This creates sequences that maintain recognizable motifs while introducing controlled variation at multiple structural levels.
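The interaction between per-step probabilities and the global density parameter can be sketched as a single gate per step. The multiplicative combination and the names below are assumptions for illustration; injecting `rng` makes the decision deterministic for testing.

```javascript
// Return the indices of steps that trigger this pass: a step fires only
// when a random draw passes the product of its own probability and the
// global density, so density = 0 silences everything and density = 1
// leaves each step at its configured probability.
function triggeredSteps(stepProbabilities, globalDensity, rng = Math.random) {
  const hits = [];
  stepProbabilities.forEach((p, i) => {
    if (rng() < p * globalDensity) hits.push(i);
  });
  return hits;
}
```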
The evolution system allows sequences to gradually mutate over time while preserving musical coherence through restoration mechanisms that periodically return to original patterns. This balance ensures both continuity and development in the generated musical content.
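A minimal sketch of the evolve-and-restore idea follows, with all class and parameter names (`EvolvingSequence`, `mutateRate`, `restoreEvery`) assumed for illustration rather than taken from the engine.

```javascript
// Each cycle, a fraction of steps mutate to a random pitch from the pool;
// every `restoreEvery` cycles the pattern snaps back to the original motif,
// balancing development against continuity.
class EvolvingSequence {
  constructor(pattern, { mutateRate = 0.1, restoreEvery = 8, rng = Math.random } = {}) {
    this.original = [...pattern];
    this.pattern = [...pattern];
    this.mutateRate = mutateRate;
    this.restoreEvery = restoreEvery;
    this.rng = rng;
    this.cycle = 0;
  }
  step(pitchPool) {
    this.cycle++;
    if (this.cycle % this.restoreEvery === 0) {
      this.pattern = [...this.original]; // restoration: return to the motif
    } else {
      this.pattern = this.pattern.map(p =>
        this.rng() < this.mutateRate
          ? pitchPool[Math.floor(this.rng() * pitchPool.length)]
          : p);
    }
    return this.pattern;
  }
}
```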
Synthesis Architecture
The synthesis engine supports both subtractive and FM synthesis within a unified voice architecture. Each voice contains dual oscillators with multiple waveforms including pulse wave generation, a dedicated noise generator with its own filtering and envelope, multi-stage envelopes for amplitude, filter, and vibrato modulation, and biquad filters with resonance control alongside separate high-pass filtering.
Voice allocation uses a pre-allocated pool approach with round-robin distribution, eliminating voice stealing artifacts and ensuring consistent performance. Each part can be configured for polyphony levels with automatic pan spread distribution across the stereo field.
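The pool-plus-round-robin scheme, including pan spread, might look like the sketch below; the names are assumptions, not the engine's implementation.

```javascript
// Voices are created once up front; allocation just cycles an index, so no
// voice is ever "stolen" mid-note by a search-for-quietest heuristic, and
// allocation cost is constant.
class VoicePool {
  constructor(size) {
    this.voices = Array.from({ length: size }, (_, i) => ({
      id: i,
      note: null,
      // Spread voices evenly across the stereo field, from -1 (L) to +1 (R).
      pan: size > 1 ? -1 + (2 * i) / (size - 1) : 0,
    }));
    this.next = 0;
  }
  allocate(note) {
    const voice = this.voices[this.next];
    this.next = (this.next + 1) % this.voices.length;
    voice.note = note;
    return voice;
  }
}
```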
The FM synthesis implementation uses oscillator 2 as a modulator for oscillator 1 frequency, with dedicated depth and ratio controls optimized for musical expressiveness.
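At the waveform level this can be sketched as a pure function. Note one assumption: the engine modulates oscillator 1's frequency input, while the sketch uses the closely related phase-modulation form common in FM synths; the names are illustrative.

```javascript
// Two-operator FM sample: the modulator runs at `ratio` times the carrier
// frequency, and `depth` scales how far it deflects the carrier's phase.
function fmSample(t, carrierFreq, ratio, depth) {
  const modulator = Math.sin(2 * Math.PI * carrierFreq * ratio * t);
  return Math.sin(2 * Math.PI * carrierFreq * t + depth * modulator);
}
```

Integer ratios keep the sidebands harmonic; increasing depth brightens the tone by spreading energy into higher partials.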
Effects Processing Chain
Audio processing follows a carefully optimized signal path. Soft clipping provides harmonic saturation with automatic gain compensation. A three-band EQ offers low-shelf, parametric mid, and high-shelf filtering. Dynamic limiting prevents output clipping while maintaining musical dynamics. Stereo chorus creates enhanced stereo imaging through independent left/right LFOs. Tempo-synchronized delay provides rhythmic echoes with BPM synchronization, and convolution reverb delivers high-quality spatial processing with damping controls.
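The soft-clipping stage with gain compensation can be sketched as a normalized tanh shaper; this is one common choice, assumed here for illustration rather than the engine's actual transfer curve.

```javascript
// tanh saturates peaks smoothly; dividing by tanh(drive) rescales the curve
// so a full-scale input (|x| = 1) still maps to full scale at the output,
// which is the "automatic gain compensation" part.
function softClip(x, drive = 2) {
  return Math.tanh(drive * x) / Math.tanh(drive);
}
```

Higher `drive` values push more of the waveform into the saturated region, adding harmonic content without ever exceeding full scale.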