Interesting technology -- that is, assuming it actually is technology
that extracts the notes from the original recordings. Doing this
might be possible with today's technology, but I would have my doubts
that the process is all software and does not involve a fair amount
of manual human intervention.
This is not the same as, but may be similar to, the process by which
printed books are scanned and translated to computer text. The
computer can optically scan 80% to 90% of the text without error.
On a good day it might get 90% to 97%, depending on the quality of
the print. Still, a human has to go over the scanned text and correct
the occasional error made by the computer. This can be further
automated by having the computer "read" the text after it is scanned,
looking for common phrases and filling in the likely missing words.
Still, in the end the text needs to be proofed by a human.
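To give a flavor of that kind of automated cleanup, here is a minimal
sketch: snap each scanned word to the closest word in a vocabulary when
it is only one edit away. The vocabulary and threshold are purely
illustrative, not how any particular OCR package actually works.

```python
# Sketch of dictionary-based OCR post-correction (illustrative only):
# replace a scanned word with the closest vocabulary word when the
# difference is a single edit.

def edit_distance(a, b):
    """Levenshtein distance between two strings, via dynamic programming."""
    dp = list(range(len(b) + 1))  # dp[j] = distance from a[:i] to b[:j]
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,           # delete ca
                        dp[j - 1] + 1,       # insert cb
                        prev + (ca != cb))   # substitute ca -> cb
            prev = cur
    return dp[-1]

def correct(word, vocabulary, max_dist=1):
    """Return the closest vocabulary word if it is within max_dist edits."""
    best = min(vocabulary, key=lambda v: edit_distance(word, v))
    return best if edit_distance(word, best) <= max_dist else word

vocabulary = {"scanned", "text", "human", "error", "computer"}
print(correct("scaned", vocabulary))   # a likely OCR dropout -> "scanned"
print(correct("qwxzt", vocabulary))    # nothing close: left unchanged
```

Even with tricks like this, the last pass still belongs to a human
proofreader, as the author notes.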
Having a computer optically recognize characters on a page is one
thing; deciphering notes in music is another. The printed page is a
reasonably predictable "environment" for the computer. The page has
well defined margins, the font will stay consistent throughout the
book, there is a constant contrast between the white of the page and
the black of the ink, and letter spacing and format generally will not
change from the start of the book to the end.
Even computer speech recognition relies on a somewhat well defined
format to figure out where one word ends and the next begins. Speech
tends to have pauses and pitch changes (such as the pitch going up at
the end of a question).
Compared to computer book scanning or speech recognition, music has
almost none of the "clues" that those technologies rely on to decode
letters and spoken words. If it were single notes with brief silence
between them, converting the notes to MIDI would not be all that
difficult, and would be somewhat analogous to the computer deciphering
spoken words and converting them to text one word at a time.
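That easy case -- one clean, isolated note -- can be sketched in a few
lines: count zero crossings to estimate the pitch of a pure tone, then
map the frequency to the nearest MIDI note number. The sample rate and
function names are invented for illustration; real pitch trackers use
sturdier methods (autocorrelation, spectral analysis), and none of this
is Zenph's actual technique.

```python
import math

SAMPLE_RATE = 8000  # samples per second (illustrative)

def synth_tone(freq, seconds=0.5):
    """Generate a pure sine tone -- the idealized 'single note' case."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def estimate_freq(samples):
    """Estimate pitch by counting zero crossings: two per cycle for a clean tone."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    return crossings * SAMPLE_RATE / (2 * len(samples))

def freq_to_midi(freq):
    """Map a frequency to the nearest MIDI note number (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(freq / 440.0))

print(freq_to_midi(estimate_freq(synth_tone(440.0))))    # A4 -> 69
print(freq_to_midi(estimate_freq(synth_tone(261.63))))   # middle C -> 60
```

The trouble, as the author says, is that real music almost never hands
the computer one note at a time.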
As humans we are able to decode notes in music because we have a mental
frame of reference of just what music is and what it should sound like
even if we have never heard a particular piece before. Even so, it is
difficult for us to define in the minute detail needed by a computer
what it is about a particular sound that we will call music versus what
we call noise. If a note is out of sequence, off pitch, or off time,
we innately flag this as an error that needs to be fixed.
As human beings, we find that these errors just tend to _bother_ us.
The computer (or more accurately the software) has no such ability to
notice when something is not "quite right", when something sounds just
a bit off.
Music tends to be continuous, very analog "data" with ebbs and
flows, and multiple layers of overlapping sounds with unpredictable
intensities and lengths, and with varying tonal qualities. Without
some kind of predictable clues as to where one note begins and the next
one ends, having a computer dissect the music into its various
overlapping notes and intensities is a non-trivial task indeed.
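To make the overlap problem concrete, here is a small experiment (with
invented names and frequencies; this is not Zenph's method): mix two
simultaneous tones and correlate the mixture against candidate
frequencies. Each note does leave a spectral peak, but the program is
handed only the sum, and deciding which peaks are notes, which are
overtones, and when each begins and ends is where the real difficulty
lies.

```python
import cmath
import math

SAMPLE_RATE = 8000  # samples per second (illustrative)

def tone(freq, seconds=0.5):
    """A pure sine tone at the given frequency."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def magnitude_at(samples, freq):
    """Correlate the signal with a complex sinusoid at freq (one DFT bin)."""
    acc = sum(s * cmath.exp(-2j * math.pi * freq * i / SAMPLE_RATE)
              for i, s in enumerate(samples))
    return abs(acc) / len(samples)

# Two notes sounding at once: the computer receives only their sum.
chord = [a + b for a, b in zip(tone(440.0), tone(523.25))]  # A4 + C5

for f in (440.0, 523.25, 660.0):
    print(f, round(magnitude_at(chord, f), 3))
# Both note frequencies show strong peaks; 660 Hz shows almost nothing.
```

A real piano note adds a whole ladder of overtones on top of each of
those peaks, which is exactly why dissecting a full recording is so
much harder than this toy case.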
Zenph may have had some success in pioneering their technology, but
the showmanship style and lack of details in the presentation are very
suspect. I think they are looking for investment money to bring the
technology further. I suspect that the technology is not nearly as
advanced as they present it to be.
Ray Finch
[ Pianist Jim Turner (now performing with the Jim Cullem Jazz Band
[ in San Antonio, Texas) was the editor of the live recordings made
[ for the Marantz Pianocorder. I asked him how many hours were spent
[ in editing a song, and he replied that the goal was to produce
[ 30 seconds of edited music each day. -- Editor (Robbie)