andrewducker ([personal profile] andrewducker) wrote2011-11-21 11:00 am

[identity profile] pigwotflies.livejournal.com 2011-11-21 05:18 pm (UTC)(link)
That is the crucial question. And I'm not sure what the answer is. That Bach piece bugged me because it's one I know very well (and have attempted myself) and that version just sounds 'off'. Not in tuning, but in timing. Expressive playing comes from lots of factors - timing, pressure of the bow, tuning, vibrato, phrasing. In the app, the variations in the timing of the notes don't make sense. I think it's that the same pattern of tiny delays in timing is repeated all the time, which a human player wouldn't do. Any one phrase could sound like that from a human, but when they all do, the effect is artificial and odd. A sort of musical uncanny valley, maybe?

It also sounds wrong to me because the strings are plucked, whereas the piece as written is bowed. It sounds more like it's being played on a harpsichord than a cello.

[identity profile] khbrown.livejournal.com 2011-11-21 05:24 pm (UTC)(link)
Perhaps a question for those more knowledgeable about these things than me:

Could a Markov model, given an initial seed value, be used for a computer musician to make the variations in duration, pitch, etc. variable but also consistent in how they vary?
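Roughly the idea, I think, would be something like this sketch in Python - the offset states and transition probabilities here are invented for illustration, not measured from real performances, but a seeded first-order Markov chain does give you deviations that are reproducible for a given seed while still varying from note to note:

```python
import random

# Hypothetical states: timing deviations from the beat, in milliseconds.
OFFSETS = [-20, -10, 0, 10, 20]

# Hypothetical transition weights: P(next offset | current offset).
# Rows correspond to the current state; columns to OFFSETS above.
TRANSITIONS = {
    -20: [0.10, 0.40, 0.30, 0.15, 0.05],
    -10: [0.10, 0.30, 0.40, 0.15, 0.05],
      0: [0.05, 0.20, 0.50, 0.20, 0.05],
     10: [0.05, 0.15, 0.40, 0.30, 0.10],
     20: [0.05, 0.15, 0.30, 0.40, 0.10],
}

def markov_offsets(n_notes, seed):
    """Generate one timing offset per note.

    The same seed always reproduces the same 'performance', so the
    variations are consistent without being a fixed repeated pattern.
    """
    rng = random.Random(seed)
    state = 0  # start exactly on the beat
    offsets = []
    for _ in range(n_notes):
        state = rng.choices(OFFSETS, weights=TRANSITIONS[state])[0]
        offsets.append(state)
    return offsets
```

Because the next deviation depends on the current one, the output drifts ahead of or behind the beat gradually rather than jumping at random - closer to how a human rushes or drags through a phrase.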

It's a long time since I've used sequencers, but they used to have humanize and quantize functions, the former making the machine more human and the latter making the human more machine-like.