Read this blog on my website: www.paulhembree.com

Saturday, February 1, 2020

Life in the AI/ML Music Space

After two years of teaching electronic music and creative coding as a lecturer or visiting assistant professor at small liberal arts colleges in upstate New York between 2016 and 2018, I started working full time as an asset developer at Amper Music in May 2018, where I continue to work.

Our primary service at Amper is an artificial intelligence music composer that generates audio based on low-dimensional user input. Users specify just a duration, genre, and mood, and get back an audio file (or instrumental stems or multitracks) of music that closely fits those specifications. We use a proprietary database of high-quality instrumental samples that are assembled into an audio file on the fly. Every composition is unique down to each note and rhythm, and there are no pre-recorded loops or musical phrases.

You can use our web-based tool, Amper Score, to compose music, with or without video, or you can use our artificial intelligence via our API, and build your own application.
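If you're curious what building your own application against a generative-music service might look like, here is a minimal C# sketch of the request flow. The URL, JSON fields, and response handling below are purely illustrative placeholders, not Amper's actual API; see the API documentation for the real interface.

```csharp
// Hypothetical sketch only: the endpoint, JSON fields, and response format
// are placeholders, not Amper's actual API.
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class GenerativeMusicRequestSketch
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Authorization", "Bearer YOUR_API_KEY"); // placeholder credential

            // Low-dimensional input: duration, genre, and mood.
            var body = new StringContent(
                "{\"duration_seconds\": 30, \"genre\": \"cinematic\", \"mood\": \"uplifting\"}",
                Encoding.UTF8, "application/json");

            // Placeholder endpoint standing in for a generative-music service.
            var response = await client.PostAsync("https://api.example.com/compose", body);
            response.EnsureSuccessStatusCode();

            // Assume the service returns rendered audio, and save it to disk.
            var audio = await response.Content.ReadAsByteArrayAsync();
            File.WriteAllBytes("score.wav", audio);
            Console.WriteLine("Received " + audio.Length + " bytes of audio.");
        }
    }
}
```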

Of course, the mood elicited by any music is an elastic and contingent phenomenon based on listener culture and life experience, but our work focuses on meeting a general listener's genre expectations, assuming enculturation into the Western media landscape. Furthermore, users are free to explore similar moods to find things that fit their expectations better, or even to use drastically contrasting moods from our database.

Asset development involves supervising the assembly of data that describes each genre-mood pair. Some of that data takes the form of code as well, so the job involves an enjoyable mixture of programming, composition, analysis, and orchestration. We thrive by exploiting the distinctive features found in these genre-mood pairs. Our artificial intelligence greatly magnifies the potential of human creativity: we make a relatively small number of decisions that can produce a virtually infinite number of compositions within a genre-mood pair.

Amper is used within Reuters' productivity suite, we've been part of Adobe Premiere, and we've collaborated on projects with LG, Betty Who, and MasterCard, among others.

Visit our website here: https://www.ampermusic.com/

Friday, May 13, 2016

New recordings of very different pieces!

I had the pleasure of working on two incredibly different composition projects this Winter and Spring, allowing me, on the surface at least, to explore two completely opposite aesthetic extremes. A listener might have difficulty imagining one composer as responsible for both pieces, which, I suppose, makes me difficult to "brand." However, I refuse to pigeonhole myself in a particular stylistic or aesthetic camp, and I relish the opportunity to work with sound in multiple ways. If you don't like one piece, listen to the other!



Cerebral Hyphomycosis (2016) was premiered at UC San Diego by cellist Tyler J. Borden.  You can watch the video here (with footage by SALT Arts Documentation, and post-production video editing by me):


This is a duet for live cellist, and a virtual cellist that is reconstructed from a database of video clips, using a technique that might be called "lo-fi plundergraphic audiovisual concatenative synthesis."  As the live cellist plays, his pitch is analyzed and used to look up the location of other pitches played in the database, one of which is selected and looped.  Each pitch on the live cello, with a generous margin of error, triggers a specific pitch from the database.  For instance, any time the live cellist plays between a low C and C#, a B an octave above is triggered in the virtual cellist.  A variety of intervals are available between the live and virtual cellists using this fixed pitch transform, which is maintained throughout the piece, and used to generate the harmonic material.   The database of video clips is drawn from music that is primarily monophonic (in this case the J.S. Bach Cello Suite in G Major), and harmonically dissimilar to Cerebral Hyphomycosis.
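To make the lookup concrete, here is a minimal C# sketch of the fixed pitch transform. The clip names, the particular mapping entries, and the semitone rounding below are illustrative placeholders rather than the analysis and table actually used in the piece.

```csharp
// A sketch of the fixed pitch transform: map an analyzed live-cello pitch to
// the database clip that the virtual cellist should loop. Clip names and
// mapping entries are illustrative, not the piece's actual table.
using System;
using System.Collections.Generic;

class FixedPitchTransformSketch
{
    // Live pitch (as a MIDI note number, with a generous margin of error
    // built in by rounding) -> database clip to loop.
    static readonly Dictionary<int, string> Transform = new Dictionary<int, string>
    {
        { 36, "bach_clip_B.mov" }, // the cello's low C triggers a B above it
        { 37, "bach_clip_B.mov" }, // C# falls within the same margin of error
        { 38, "bach_clip_E.mov" }, // further entries define the interval palette
    };

    // Round an analyzed frequency to the nearest equal-tempered MIDI note.
    static int FrequencyToMidi(double hz) =>
        (int)Math.Round(69 + 12 * Math.Log(hz / 440.0, 2));

    static string LookUpClip(double detectedHz)
    {
        int note = FrequencyToMidi(detectedHz);
        return Transform.TryGetValue(note, out var clip) ? clip : null; // nothing triggered outside the table
    }

    static void Main()
    {
        // A live pitch near the cello's low C (about 65.4 Hz) triggers its mapped clip.
        Console.WriteLine(LookUpClip(65.4) ?? "no clip");
    }
}
```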



Flieg, Kindlein, Flieg (2016) was premiered by Amasong, Champaign-Urbana's Premiere Lesbian/Feminist Chorus.  It is based on the music and text of the traditional German lullaby Schlaf, Kindlein, Schlaf, and a traditional parody of that lullaby's text, Maikäfer Flieg (many thanks to Petra Watzke for suggesting this source material, and for her translation assistance!).  As the text moves from the lullaby to the parody, the music responds to the parody's depictions of war and flight through paralysis, fragmentation, and dissonant harmonic material. After religious visions, sinister black hounds, and obsessive fugue states, the music and text return to a place that has been shattered, but where a dream of "home" remains.




Despite their surface-level differences, both pieces fundamentally arise from a contrapuntal impulse, but with extremely different rules governing the behavior of musical voices, and extremely different timbres determined by performing forces and electronic processing.  The choral piece slowly distorts a folk lullaby with chromatic, but tonally informed, voice-leading.  The cello piece is essentially two-part counterpoint, with a pitch resolution in quarter-tones and an additional third part generated by ring modulation.  Both pieces also capture and reconfigure pre-existing music: the lullaby used in the choral piece is a well-known folk tune in Germany, while the cello piece synthesizes something reminiscent of Ferneyhough's Time and Motion Study by procedurally assembling fragments of J.S. Bach.
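For the curious, ring modulation is just sample-by-sample multiplication of two signals, which yields the sums and differences of their frequencies. Here is a minimal sketch, with sine waves standing in for the two cello parts; the buffer length and test frequencies are placeholders for illustration only.

```csharp
// Ring modulation is pointwise multiplication of two signals; the product
// contains the sums and differences of their frequencies. The sine-wave
// stand-ins and buffer length are assumptions for illustration only.
using System;

class RingModulationSketch
{
    static float[] RingModulate(float[] a, float[] b)
    {
        int n = Math.Min(a.Length, b.Length);
        var result = new float[n];
        for (int i = 0; i < n; i++)
            result[i] = a[i] * b[i]; // sample-by-sample multiplication
        return result;
    }

    static void Main()
    {
        const int sampleRate = 44100;
        var live = new float[sampleRate];
        var virtualPart = new float[sampleRate];
        for (int i = 0; i < sampleRate; i++)
        {
            double t = (double)i / sampleRate;
            live[i] = (float)Math.Sin(2 * Math.PI * 220.0 * t);         // stand-in for the live cello
            virtualPart[i] = (float)Math.Sin(2 * Math.PI * 277.18 * t); // stand-in for the virtual cello
        }
        float[] third = RingModulate(live, virtualPart); // contains roughly 57 Hz and 497 Hz components
        Console.WriteLine(third[1000]);
    }
}
```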

Monday, February 1, 2016

2016 Premieres, Performances and Speaking Events

I'm extremely grateful to have a wide range of opportunities for composition, performance and guest speaking this year.  Here's a run-down of the events this Winter and Spring:


1/3: Sam Wells performs Blue Sky Catastrophe at Spectrum (NYC), as part of Radical Brass, Vol. 4

2/2: Guest lecturing on my audio-visual improvisation platform, Apocryphal Chrysopoeia, at Illinois State University

2/18: Performing Apocryphal Chrysopoeia, along with Reynolds' Mind of Winter with the Callithumpian Consort at New England Conservatory

3/1: Guest lecturing on Apocryphal Chrysopoeia at the University of Illinois

3/15-3/24: Performing my Rilke setting, Light from Outside, and a new arrangement of the arias from Reynolds' Justice with Tiffany DuMouchelle and Steve Solook at the University at Buffalo (SUNY).

3/29-4/9: Workshopping new electroacoustic components of Reynolds' FLiGHT in San Diego

4/6-4/11: Performing and recording my new piece Cerebral Hyphomycosis with cellist TJ Borden, while producing a new recording of Brian Ferneyhough's Time and Motion Study No. 2 with Borden and James Bean

5/22: Performing the new electroacoustic components of FLiGHT at the Phillips Collection in Washington DC with Ross Karre and the JACK Quartet


Summer and Fall performances include Reynolds' Shifting/Drifting with Irvine Arditti at the Darmstadt International Summer Course, and Reynolds' complete FLiGHT in San Diego and New York.


Tuesday, January 5, 2016

Flieg, Kindlein, flieg...

I just finished my first (acoustic) piece of 2016: a dark interpretation of the equally dark German traditional, Maikäfer flieg, for Amasong, Champaign-Urbana's premiere lesbian/feminist chorus. Their spring concert will be on the subject of flight - as in, fleeing from civil war. 


Sunday, January 3, 2016

Just finished Weave I and II for the FLiGHT project.

I just finished up creating 20 minutes of computer music, woven around spoken word, for the FLiGHT project.  If you are on the East Coast, you can hear this stuff on May 22.




Saturday, December 26, 2015

Dynamic Music for Game Audio Example

Happy holidays folks. Want to see how long you can survive in a procedurally generated zombie survival game?

I've been exploring the dynamic music possibilities afforded by the new Unity 5 engine while getting reacquainted with the C language family. So I hacked one of the tutorial projects. This uses Audio Mixer Snapshots and the .TransitionTo method to do most of the crossfading work.
I composed the music back in Spring 2015 for teaching the concept of dynamic audio in my Game Music course, but it otherwise hasn't seen the light of day. It uses a totally conventional trick to harmonically re-contextualize a little pulse wave duet with the bass, in order to switch between relaxed and tense game states: the duet works over both a major key and its relative minor, so you can simply toggle back and forth between them. I use a little timbral contrast as well... my new favorite thing is running an NES emulator into various guitar amp/cabinet emulators.
The entire piece is meant to be modular: each member of the duet is optional, and two additional layers follow the same kind of harmonic neutrality so they can be re-contextualized too.  In later versions, these layers could be toggled in relation to game states, but the current game simply doesn't have enough persistent states to warrant that, so they are presented in succession here.
The initial idea was to create a dynamic music example that undergrads with only basic music theory and DAW skills can get a handle on immediately... and it now lives on in this little web app.
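Here's a minimal sketch of the snapshot crossfade in Unity C#; the snapshot names, the two-second transition time, and the trigger that flips between states are placeholders rather than this project's actual code:

```csharp
// A minimal Unity sketch of the snapshot crossfade described above. Snapshot
// names, transition time, and the state trigger are placeholder assumptions.
using UnityEngine;
using UnityEngine.Audio;

public class DynamicMusicController : MonoBehaviour
{
    // Snapshots authored in the Audio Mixer: one balances the layers (and bass)
    // for the relaxed/major state, the other for the tense/relative-minor state.
    public AudioMixerSnapshot relaxedSnapshot;
    public AudioMixerSnapshot tenseSnapshot;
    public float crossfadeSeconds = 2.0f;

    // Call this whenever the game state changes (e.g. zombies get close).
    public void SetTense(bool tense)
    {
        var target = tense ? tenseSnapshot : relaxedSnapshot;
        target.TransitionTo(crossfadeSeconds); // crossfade mixer parameters toward the target snapshot
    }
}
```

Hooking SetTense up to whatever gameplay variable tracks danger is all that's left; the Audio Mixer interpolates each layer's levels toward the target snapshot over the transition time.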

Check out a more detailed description with musical excerpts on my website.

Or, play the game.

Saturday, October 24, 2015

Time & Motion Study No. 2 Remix Experiment





Here is a brief test of an a/v playback engine, using overlapping, crossfading loops and delays with special compositing modes and ring modulation.  This is a sketch for a companion piece to Ferneyhough's Time & Motion Study No. 2.

Featuring:
TJ Borden, cello
Michael Matsuno, page turner
Tina Tallon, source video
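For reference, here is a minimal C# sketch of the overlapping-loop crossfade idea, using an equal-power curve between the tail of one loop and the head of the next; the overlap length and the silent test buffers are placeholders, not the actual playback engine.

```csharp
// A sketch of an equal-power crossfade between the tail of an outgoing loop
// and the head of the incoming one. Overlap length and buffers are placeholders.
using System;

class LoopCrossfadeSketch
{
    // Mix the last 'overlap' samples of 'outgoing' into the first 'overlap'
    // samples of 'incoming', returning the blended incoming buffer.
    static float[] CrossfadeLoops(float[] outgoing, float[] incoming, int overlap)
    {
        var mixed = (float[])incoming.Clone();
        for (int i = 0; i < overlap; i++)
        {
            double phase = (double)i / overlap;                     // 0 -> 1 across the overlap
            float fadeOut = (float)Math.Cos(phase * Math.PI / 2.0); // equal-power fade-out
            float fadeIn  = (float)Math.Sin(phase * Math.PI / 2.0); // equal-power fade-in
            mixed[i] = outgoing[outgoing.Length - overlap + i] * fadeOut + incoming[i] * fadeIn;
        }
        return mixed;
    }

    static void Main()
    {
        var loopA = new float[44100];
        var loopB = new float[44100];
        Console.WriteLine(CrossfadeLoops(loopA, loopB, 2048).Length); // 44100
    }
}
```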