
Poetic Signal is not only a concept or a workflow.
It is a working system you can use.
The tools described below are available as a unified toolkit, designed to translate a single visual source into sound, image, and language in real time.


Instead of manually reconstructing each stage, the system allows you to:

  • Convert visual information directly into MIDI structure

  • Generate derived visual elements from the same source

  • Build synchronized audiovisual compositions on a timeline

  • Integrate glitch-based text as a visual signal layer

  • Export high-resolution outputs ready for production

This is not a preset-based generator.
It is a signal transformation environment.

Luma2MIDI / SpectraLine / Vapor Signal / Image Flow / SignalType

All components operate as parts of a single perceptual pipeline.


START USING THE TOOLKIT
Access the Poetic Signal Toolkit and begin transforming your own images into unified audiovisual works.
 

→ Enter the system
https://poetic-signal-portal.vercel.app/

Poetic Signal is an abstract poetic form in which sound, image, and language operate as equal signals within a unified perceptual field.
Rather than treating music, visuals, or text as separate media, the work emerges through a process in which a single visual source is transformed into multiple signal forms and later recombined into a unified audiovisual structure.

The production process unfolds in several stages.

 

1. Visual Spectrum Analysis and MIDI Conversion

The process begins with a chosen visual image.

The color distribution, gradients, and luminance structures contained in the image are analyzed and translated into musical data. This analysis is performed using Luma2MIDI, an application that converts visual information into MIDI signals through multiple analytical approaches.

At this stage, the visual image functions as the origin of the musical structure.
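Luma2MIDI's internal mapping is not published here, but the general idea of translating luminance into musical data can be illustrated with a minimal sketch: each pixel's brightness is scaled into a MIDI pitch and velocity. The Rec. 601 luma weights and the 36–84 pitch range are assumptions for illustration, not the application's actual method.

```python
# Minimal sketch: map pixel luminance to MIDI note/velocity pairs.
# The luma weights (Rec. 601) and the 36-84 pitch range are
# illustrative assumptions, not Luma2MIDI's actual mapping.

def luminance(r, g, b):
    """Perceptual luminance of an RGB pixel (channels 0-255)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def pixels_to_notes(pixels, low=36, high=84):
    """Scale each pixel's luminance into a MIDI pitch and velocity."""
    notes = []
    span = high - low
    for r, g, b in pixels:
        y = luminance(r, g, b) / 255.0     # normalize to 0..1
        pitch = low + round(y * span)      # darker -> lower pitch
        velocity = max(1, round(y * 127))  # brighter -> louder
        notes.append((pitch, velocity))
    return notes

row = [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
print(pixels_to_notes(row))  # -> [(36, 1), (60, 64), (84, 127)]
```

Scanning an image row by row in this way already yields usable melodic material; the toolkit's multiple analytical approaches go well beyond this single mapping.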

 

2. Musical Construction from MIDI Material

The generated MIDI signals are imported into a digital audio workstation such as Ableton Live.

Here, the MIDI data is treated not as a finished composition but as structural material. The artist intuitively selects instruments, synthesizers, and sound textures, shaping rhythm, harmony, and arrangement.

Through this process, the visual information that initiated the system is reorganized into a musical composition. The MIDI serves as raw material, and the artistic decisions made during arrangement determine the character of the piece.

 

3. Sonic Optimization

Once the composition is completed, the audio file is processed through specialized sound tools such as Harmonic Reframe and Water Brain Modulator.

These tools are used to refine resonance, spatial characteristics, and overall sonic balance.

At this stage, the musical layer of the work reaches its final form.

 

4. Generation of Derived Visual Elements

The same original visual image is then used to generate additional visual material.

Applications such as SpectraLine and Vapor Signal analyze the image’s color structures and gradients to produce derived visual elements, including line-based abstractions, atmospheric mist textures, and other visual effects.

These generated materials retain a structural relationship to the original image while expanding its visual vocabulary.
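SpectraLine's analysis is its own; as a generic sketch of how line-based abstractions can be derived from an image's structure, one simple approach marks the positions where brightness changes sharply between neighboring pixels. The grayscale grid and threshold below are invented for illustration.

```python
# Sketch: derive line positions from brightness gradients.
# The grayscale grid and the threshold value are illustrative;
# SpectraLine's actual analysis is not documented here.

def horizontal_edges(grid, threshold=50):
    """Return (row, col) positions where brightness jumps sharply."""
    edges = []
    for r, row in enumerate(grid):
        for c in range(len(row) - 1):
            if abs(row[c + 1] - row[c]) >= threshold:
                edges.append((r, c))
    return edges

image = [
    [10, 10, 200, 200],  # one sharp jump after column 1
    [10, 40, 70, 100],   # gradual ramp, no edge at threshold 50
]
print(horizontal_edges(image))  # -> [(0, 1)]
```

Connecting such edge positions into strokes is one way a line abstraction can stay structurally tied to its source image rather than being generated independently.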

 

5. Timeline-Based Visual Composition

The completed audio and visual materials are imported into Image Flow, a timeline-based visual composition tool.

Within this environment, images and visual layers are arranged while listening to the music. The artist adjusts sequence, duration, transitions, motion, and layer relationships to construct a visual flow synchronized with the musical structure.

The result is exported as a video sequence.
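Image Flow's data model is not described in the text; as a hedged sketch of what a beat-synchronized timeline involves, visual clips can be modeled as entries whose start times and durations are expressed in beats and resolved to seconds from the music's tempo. The `Clip` structure and the 120 BPM figure are assumptions.

```python
# Sketch: a beat-synchronized visual timeline.
# The Clip structure and the 120 BPM tempo are illustrative
# assumptions; Image Flow's real data model may differ.
from dataclasses import dataclass

@dataclass
class Clip:
    name: str
    start_beat: int    # beat index where the clip begins
    length_beats: int  # duration in beats

def beat_to_seconds(beat, bpm=120):
    """Convert a beat index to seconds at the given tempo."""
    return beat * 60.0 / bpm

def schedule(clips, bpm=120):
    """Resolve each clip to (name, start_sec, end_sec)."""
    return [
        (c.name,
         beat_to_seconds(c.start_beat, bpm),
         beat_to_seconds(c.start_beat + c.length_beats, bpm))
        for c in clips
    ]

timeline = [Clip("source image", 0, 8), Clip("mist layer", 4, 8)]
print(schedule(timeline))
# -> [('source image', 0.0, 4.0), ('mist layer', 2.0, 6.0)]
```

Anchoring clip boundaries to beats rather than raw seconds is what keeps cuts and transitions locked to the musical structure even if the tempo is later changed.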

 

6. Glitch Text Generation

Poetic text is then introduced as an additional signal layer.

The music and text are imported into a system that generates glitch-based typographic visuals synchronized with the sonic structure. In this stage, language functions not merely as subtitles but as a visual signal integrated with the audio environment.

The generated text elements are exported as alpha-channel image sequences using SignalType.
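SignalType's generator is not specified in the text; a minimal sketch of audio-driven glitch typography might corrupt characters in proportion to the music's amplitude envelope, so that loud moments fragment the text while quiet moments leave it readable. The envelope values, glyph pool, and threshold are invented for illustration.

```python
# Sketch: glitch a line of text in proportion to an amplitude envelope.
# The envelope values, glyph pool, and threshold are illustrative;
# SignalType's actual synchronization logic is not documented here.
import random

GLYPHS = "▓▒░#%&"

def glitch_frame(text, amplitude, threshold=0.5, rng=None):
    """Replace characters with glitch glyphs when amplitude is high.

    `amplitude` is a 0..1 envelope value for this video frame.
    Below the threshold the text passes through unchanged.
    """
    rng = rng or random.Random(0)
    if amplitude < threshold:
        return text
    # Corruption probability grows with loudness above the threshold.
    p = (amplitude - threshold) / (1 - threshold)
    return "".join(
        rng.choice(GLYPHS) if c != " " and rng.random() < p else c
        for c in text
    )

envelope = [0.1, 0.4, 0.9]  # one value per video frame
frames = [glitch_frame("poetic signal", a) for a in envelope]
print(frames)  # quiet frames stay readable, the loud frame glitches
```

Rendering each frame onto a transparent canvas then yields the kind of alpha-channel image sequence the step above describes.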

 

7. Final Composition

In the final stage, all elements are assembled within a video editing environment.

These elements include:

  • the completed music

  • the visual composition

  • mist effect clips

  • glitch text layers

Through layering, timing adjustments, and compositional refinement, the work is structured into its final audiovisual form. The finished piece is then exported at the desired resolution and format.
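The video editor handles this layering internally; numerically, stacking an alpha-channel element (such as a glitch text frame) over footage is the standard "over" compositing operation, in which each foreground pixel is blended onto the background by its alpha value. The pixel values below are illustrative.

```python
# Sketch: the standard "over" operator used when layering an
# alpha-channel element (e.g. a glitch text frame) onto footage.
# The pixel values are illustrative.

def over(fg_rgb, fg_alpha, bg_rgb):
    """Composite a foreground pixel over a background pixel.

    `fg_alpha` is 0..1; color channels are 0..255.
    """
    return tuple(
        round(f * fg_alpha + b * (1 - fg_alpha))
        for f, b in zip(fg_rgb, bg_rgb)
    )

# A half-transparent white glyph over a dark blue background:
print(over((255, 255, 255), 0.5, (0, 0, 64)))  # -> (128, 128, 160)
```

Each layer in the final assembly (mist clips, text sequences) is folded onto the one beneath it by exactly this kind of blend, bottom to top.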

 

Structural Principle

The underlying principle of Poetic Signal lies in the transformation and reintegration of signals.

A single visual source generates multiple forms of expression:

visual information becomes musical structure,
the same visual source produces derived imagery,
language is synchronized with sound as a visual signal.

Through this process, sound, image, and language emerge from a shared origin and coexist within a unified perceptual field.

Poetic Signal is therefore neither simply music nor video nor text.
It is a poetic media form composed of interacting perceptual signals.
