FooBarBaz


FooBarBaz is a livecoding experiment, presented at Festival Contato 2011.

More about Livecoding.

Code

For the code used in the project, see AudioArt.

Videos

All five parts are from the live coding presentation we gave on 20/11/2011 for about 3,500 people at Festival CONTATO, São Carlos, Brazil.

- Part 1a, basic principles: the basic principles behind the presentation.

http://vimeo.com/33012735

- Part 1b, REM and cows: Rapid Eye Movement (REM) and the use of cows.

http://vimeo.com/33018740

- Part 2, improvisation: the improvisation part of the presentation.

http://vimeo.com/33019291

- Part 3, soundscapes: the part where we used soundscapes.

http://vimeo.com/33025717

- Part 4, improvisation 2 and ending: the ending of the presentation.

http://vimeo.com/33025913

Remembering the experiment

Renato Fabbri via lists.cs.princeton.edu, 28/11/11, to ChucK

This is what I used, and it was quite enough given that the performance had another live coder plus a Pd and mixer improviser:

http://ubuntuone.com/7P9ZFMFVVa9cBr4LZ1xtjg

My replace map doesn't work, though (last line of the text file at the link). Any idea? BTW, we live coded for more than 2 thousand people here in Brazil at Festival Contato. Some say about ~5 thousand; I guess ~3.5k. ChucK live-coding with Vim. Vilson Vieira, the other live-coder, used Emacs. We projected both desktops at the same time.

cheers!

Renato
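
The replace map he mentions lives in the linked text file and is not reproduced here. As a rough sketch of the mechanism such a mapping drives (the filename and timing below are hypothetical, not from the post), ChucK's built-in Machine class can add and replace running shreds from code:

// hypothetical sketch: on-the-fly shred replacement via ChucK's Machine class
Machine.add("loop.ck") => int id;             // start a shred, keep its id
4::second => now;                             // ...meanwhile, edit loop.ck in the editor...
Machine.replace(id, "loop.ck") => int newId;  // swap in the edited version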


Renato Fabbri, 30/11/11, to ChucK

Well, I wanted to have some documentation of what I did, so here it goes, as it came out. No sound, just a visual screencast (perfect for reading as you listen to some music of your preference :P). I have not checked how it looks and can't do that now. The mpeg files run OK here; I am using mplayer on Linux. But they did not run in Kaffeine or another player (don't recall its name).

presentation-part1.mpeg http://ubuntuone.com/0w8vde6POCJdUhAfDRCB0G

presentation-part1-REM-and-cows.mpeg http://ubuntuone.com/2biXjEGbLARAG9gyJf8MmL

presentation-part2-improvisation.mpeg http://ubuntuone.com/2l6W8HhAEcw5DTcuLxP2wn

presentation-part3-soundscapes.mpeg http://ubuntuone.com/6UXsfV59e7AnvOtAJjWKAO

presentation-part4-improvisation2-ending.mpeg (uploading) http://ubuntuone.com/55te5BtDx7Fb9DdkPlezLV

Don't know if they uploaded right; I should put them on Vimeo. I would like to have my partner's screens too, but he had a problem with his laptop. Vilson, where is the code you used to play with? All the best and cheers,

Renato


Renato Fabbri, 02/12/11, to listamacambira, ChucK

> I like how you make a glorious mess instead of the stark minimalism of
> the other livecoding I've seen. I'm not sure how this would scale, but
> the difference is exciting.

Thanks! I like that too. The idea is to use the desktop to play and make it more appealing. That bouncing white ball is 'processing'. The cow is 'cowsay'. Some years ago I did what I now call LDP (Linux Desktop Playing) with jack-rack, ardour, audacity, Pd, chuck, python and even audacious. That was a really big mess, especially with ABT:

http://trac.assembla.com/audioexperiments/browser/ABeatDetector

Maybe what we are doing is live coding with inheritance from LDP.

Anyway, these are the 5 small videos on Vimeo, so anyone can take a look; they are the same five parts linked above:

http://vimeo.com/33012735
http://vimeo.com/33018740
http://vimeo.com/33019291
http://vimeo.com/33025717
http://vimeo.com/33025913

cheers,

rfabbri


Vilson Vieira, 02/12/11, to Renato, ChucK, listamacambira

Hey Kassen and other ChucKists!

I think it is interesting to note that we used an alternative approach to the sync between Renato and me. The sound was generated by Renato using ChucK/Vim/Jack and by me using ChucK/Emacs/Jack, without sync. The audio from both of us was passed to a Pd patch running on a third computer operated by Gilson Beck, another composer and part of the trio (FooBarBaz). Gilson spatialized and mixed the audio we generated through a visual interface: the movements of his hands were tracked by a "color tracker" implemented by Ricardo Fabbri in Pd/GEM, and the x/y coordinates defined the panning effects. This way we could mix both audio streams at certain times, creating a dialogue between my sound, Renato's sound and Gilson's.

Unfortunately I lost my laptop, and all the code with it, after the presentation, but I used a setup similar to Renato's recorded screencasts, using ChucK as a live sampler, similar to Thor's ixi lang approach. A snippet of the code was saved here:

https://gist.github.com/1379142

I think Gilson can send you more details about his Pd patch and some videos of the human body interface tracked by colors.

All the best.

foo.ck:

// manipulate this one: re-sending this file while foosp.ck runs reassigns the Foo parameters
["samples/fx/s20.wav"] @=> Foo.name;
[0.] @=> Foo.prop;
[.25, .15] @=> Foo.rate;
[2., 1., 1., 4.] @=> Foo.du;
[.8] @=> Foo.gain;

foosp.ck:

// run this before foo.ck, and tg.ck before everything
public class Foo {
    static string name[];   // sample file to play
    static float prop[];    // start position in the sample, as a proportion [0, 1]
    static float rate[];    // playback rates
    static float du[];      // durations, in beats
    static float gain[];    // gains
}

["samples/fx/s22.wav"] @=> Foo.name;
[.0] @=> Foo.prop;
[1.] @=> Foo.rate;
[4.] @=> Foo.du;
[0.] @=> Foo.gain;

TimeGrid tg;
tg.set(1::minute/60/2, 8, 10); // half-second beat (120 BPM), 8 beats/measure, 10 measures/section
tg.sync();

SndBuf buf => JCRev j => dac;
.5 => j.gain;
.2 => j.mix;

0 => int i;
while (true) {
    Foo.name[0] => buf.read;                                                // (re)load the sample
    Math.trunc(buf.samples()*Foo.prop[i%Foo.prop.size()]) $ int => buf.pos; // jump to the start proportion
    Foo.gain[i%Foo.gain.size()] => j.gain;
    Foo.rate[i%Foo.rate.size()] => buf.rate;
    tg.beat*Foo.du[i%Foo.du.size()] => now;                                 // hold for du beats
    i++;
}

tg.ck:

//basic timing operations, abbreviated
public class TimeGrid {

   dur beat;
   dur meas;
   dur sect;
   int nbeat;
   int nmeas;
   //phase and magnitude of offset
   float measPhase;
   dur measOffset;
   fun void set(dur mybeat, int nb, int nm) {
       mybeat => beat;
       nb => nbeat;
       beat*nbeat => meas;
       nm => nmeas;
       meas*nmeas => sect;
   }
   //sync to beat
   fun void sync() {
       beat - (now % beat) => now;
   }
   fun void sync(dur T) {
       T - (now % T) => now;
   }
   //how long to sync to this duration
   fun dur syncDur(dur T) {
       return (T - (now % T));
   }
   //minimum time
   fun dur tmin(dur a, dur b) {
       return (a < b) ? a : b;
   }
   //get beat in relation to section
   fun int guess() {
       //this approach would not count sections
       //return ((now % sect) / beat) $ int;
       //this approach is completely global
       return (now / beat) $ int;
   }
   //get the mod rhythm
   fun int bmod(int r) {
       return (r%nbeat);
   }
   fun int mmod(int r) {
       return (r/nbeat%nmeas);
   }
   fun int smod(int r) {
       return (r/nbeat/nmeas);
   }
   //section markers
   int g;
   int b;
   int m;
   int s;
   int i;
   int j; //for anything, really
   int c; //counter in measure
   int d; //counter in section
   //events for stuff
   Event newMeas;
   Event newSect;
   
   //update markers
   fun int up() {
       guess() => g;
       //experimental
       if ( b-bmod(g)>0 ) { //if b decreases
         0=>c;
         newMeas.broadcast(); 
       }
       else c++;
       //TODO: make a counter like c, but for the measure
       if ( m-mmod(g)>0 ) { //if m decreases
         0 => d;
         newSect.broadcast();
       }
       else d++;
       
       bmod(g) => b;
       mmod(g) => m;
       smod(g) => s;
       i++;
       return true;
   }
   //update the markers of another timeGrid
   fun int up( TimeGrid tg ) {
       this.up();
       b => tg.b;
       m => tg.m;
       s => tg.s;
       g => tg.g;
       c => tg.c;
       i => tg.i;
       j => tg.j;
       return true;
   }
   //pause: make shred wait until input low
    //ill-conceived, really, because it can't monitor a changing input
   /*
   fun void pause( int a ) {
       while ( a ) {
           beat=>now;
           sync();
       }
   }
   */
   

}
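
A minimal usage sketch of TimeGrid (not from the original thread), in the spirit of foosp.ck: one shred refreshes the grid markers every beat, and another wakes on the measure events that up() broadcasts. It assumes tg.ck is already running.

// hypothetical usage sketch of TimeGrid
TimeGrid tg;
tg.set(1::minute/60/2, 8, 10); // half-second beat (120 BPM), 8 beats/measure, 10 measures/section
tg.sync();                     // sleep until the next beat boundary

fun void updater() {
    while (true) {
        tg.up();               // refresh b/m/s markers, fire newMeas/newSect
        tg.beat => now;
    }
}
spork ~ updater();

while (true) {
    tg.newMeas => now;         // wake on each new measure
    <<< "measure", tg.m, "of section", tg.s >>>;
}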