design applied to multimedia installations


design aplicado a instalações multimídia com projeção

from the 13th to the 15th of march, i’ll be at the trackers cultural center, in são paulo, brazil, to give a short workshop whose purpose is to provide an intense and productive space for the study and production of multimedia installations that make use of projection techniques. this workshop is based on previous ones i have given on the subject, but this time we’ll be focusing on design thinking and models applied to projects that involve projected images.

we’ll be using three main tools to experiment with this sort of media: vpt, a well-known open-source projection mapping tool developed by hc gilje; lpmt, also an open-source projection mapping tool and a very nice option for linux users; and google sketchup, which we’ll use to sketch and analyze possible projection situations.

projection mapping has become a well-known and creative use of visual language applied to unusual contexts. the technology attracts the attention of a wide variety of professionals, enthusiasts and researchers, with different knowledge backgrounds. hence, a proper design model is important, considering the diversity of disciplines and techniques such projects usually require. it is interesting to understand how a projector works, what its components are, how the image “fits” the projection support, how a video should be formatted to suit specific projection surfaces, how a projection mapping situation can be properly calculated, and so forth.
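
just to give a sense of the kind of calculation involved (this is not part of the workshop material, and the numbers are hypothetical), a projector’s throw ratio relates the distance to the surface to the width of the projected image, so a tiny processing sketch can estimate how large an image will be on a given wall:

//estimating the size of a projected image from the projector's throw ratio
//throw ratio = throw distance / image width (taken from the projector's spec sheet)
//all values here are hypothetical examples

float throwRatio = 1.5;      //typical value for a standard lens
float throwDistance = 4.0;   //distance from the lens to the surface, in meters
float aspect = 4.0 / 3.0;    //native aspect ratio of the projector

void setup() {
  float imageWidth = throwDistance / throwRatio;  //width of the projected image, in meters
  float imageHeight = imageWidth / aspect;        //height of the projected image, in meters
  println("projected image: " + imageWidth + " m wide by " + imageHeight + " m high");
}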

therefore, the program of the workshop is proposed as follows:

day 1 (13/03): introduction to design thinking applied to projection technology

what sorts of knowledge are related to projection mapping technology? what is a projector, and how does it work? what sorts of equipment should be considered in such projects? what about video formatting? these common questions will be addressed while we introduce a methodology that can be used in several kinds of projection situations. we’ll do a little exercise in which the participants will be invited to create an installation and sketch it in google sketchup.

day 2 (14/03): vpt and lpmt – adapting images using software

this is the day in which i’ll introduce the two projection mapping tools and explain a little bit about the similarities and differences between virtual and “real-world” texturing. we’ll go through typical mapping techniques, such as vertex distortion, masking and frame blending. as an exercise, the participants will be invited to adapt images and videos of their choice to the support they chose on the previous day.
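
to illustrate the vertex distortion idea outside of vpt or lpmt (this is only a rough processing sketch, assuming a file called texture.jpg in the sketch’s data folder), here is a textured quad whose corners can be dragged with the mouse, which is the basic gesture behind fitting an image onto a surface:

//a rough illustration of vertex distortion: drag the corners of a textured quad
//assumes an image file called texture.jpg in the sketch's data folder (hypothetical)
PImage img;
float[][] corners = { {50, 50}, {350, 70}, {360, 330}, {40, 310} };
int grabbed = -1;  //index of the corner being dragged, -1 if none

void setup() {
  size(400, 400, P3D);
  img = loadImage("texture.jpg");
}

void draw() {
  background(0);
  noStroke();
  beginShape(QUADS);
  texture(img);
  vertex(corners[0][0], corners[0][1], 0, 0);
  vertex(corners[1][0], corners[1][1], img.width, 0);
  vertex(corners[2][0], corners[2][1], img.width, img.height);
  vertex(corners[3][0], corners[3][1], 0, img.height);
  endShape();
}

void mousePressed() {
  for (int i = 0; i < 4; i++) {
    if (dist(mouseX, mouseY, corners[i][0], corners[i][1]) < 15) {
      grabbed = i;
    }
  }
}

void mouseDragged() {
  if (grabbed >= 0) {
    corners[grabbed][0] = mouseX;
    corners[grabbed][1] = mouseY;
  }
}

void mouseReleased() {
  grabbed = -1;
}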

day 3 (15/03): wrapping it all up

on the last day, participants will be able to finish their installations and will have time for specific questions that may not have been answered during the previous days.

to apply for the workshop or to get more info, click here (only in portuguese).

hope to see you there!

thinking the urbe: avlab córdoba september 2011


the 4th installment of the avlab córdoba meetings (remember the last one?), curated by daniel gonzález xavier, will take place at the local centro cultural españa on the 19th and 20th of september. this time, the event proposes a discussion around a theme that is brought back every once in a while, underlining an insistent necessity to rethink urban and informational scapes. it is clear by now that the urbe can be analyzed in terms of language, that is, as a composition of mediatic processes that continually modify the patterns of its experience. cellphones and online tools guide human nodes (a term i will occasionally use instead of users) to their destinations, while indicating their movement through streets and places of interest, connecting millions of people in a narrative network, a virtual culture of the exchange of opinions (a fact that allows certain authors to identify in contemporary networks a revival of certain aspects of oral cultures). cameras everywhere, either framing a stage for a particular episode of someone’s life or serving as control mechanisms. the sound of cars, planes, helicopters, indiscriminate advertising. radio-frequency communications. different but interrelated layers of information that can be thought of as a platform to analyze, interpret and represent urban scapes and their complex vectors.

with a fast-changing scenario of mobile, network and simulation technologies, it’s always interesting to bring such a discussion forward, not only to understand such a schizophrenic rate of ‘innovations’, but also to try to identify their impact on different cultural contexts, as well as to consider the consequences of an increasingly obsolete material culture.

this avlab brings into the discussion the distinct but somehow intertextual perspectives of a few artists and researchers.

the project bineural-monokultur (christina ruf and ariel dávila) will share their experience with their audio tours, a work that proposes an ‘actorless’ theatre, with the city as a stage for narrative sound interfaces.

the information science educators and researchers yamila ferreyrra and valerya sbuelz propose a discussion around possible cartographies of bodies, synergies and contrasts between different urban territorialities in the context of córdoba. their workshop will be based on the creation of possible strategies to map such circumstances.

my participation in the discussion concerns projection as a technology and a technique for providing visual interfaces for détournements: the technical, moving, realtime image applied to space, superposing layers of visual subjectivity over urban scapes. there will be a quick workshop to present the basic idea behind projection mapping to the participants (based on previous meetings), using the open-source tool vpt. the workshop happens on september 20th, from 10am to 12:30pm.

after the workshops, at 2:00pm, a quick lab will be proposed, in which participating artists and workshop attendees will engage in the development of a possible installation at the cultural center and its surroundings, during the night that closes the event.

please consult the website of the centro cultural españa to check the schedule of the event and to apply for the workshops.

the event will be streamed live by the cce.

hope to see you there!


realtime image processing with puredata


procesamiento de imágenes en tiempo real con puredata

from august 25th to 28th, i’ll be at la cúpula media lab, in córdoba, argentina, for one more workshop. this time we will be using puredata as a tool for generating realtime images, a technique that can be applied to many different transmedia experiments in drama, cinema, performance, dance, vjing and so on. the program of the workshop is described below:

day 1

> discussion of the platform: what is puredata and what can it do?

> installing pd and its libraries (a walkthrough on mac os and linux);

> intro to the patch metaphor / non-linear programming;

> intro to GEM/openGL;

> exercise 1: loading and displaying a video in GEM;

day 2

> exercise 2: puredata video mixer;

> exercise 3: realtime image filters (a rough processing analogue is sketched after this program);

days 3 and 4

> exercise 4: openCV – computer vision and puredata – face and other visual pattern identification;

> exercise 5: how to use arduino and puredata to modify images;
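
the exercises above are built as pd/GEM patches, which are graphical rather than textual, so i can’t paste them here as code. as a very rough analogue of exercise 3 written in processing instead (assuming a recent version of the processing video library and a connected webcam), the sketch below applies a realtime threshold filter to the camera image, with the cutoff controlled by the mouse:

//a realtime image filter sketched in processing, as an analogue of exercise 3
//assumes a webcam and a recent version of the processing video library
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
  filter(THRESHOLD, map(mouseX, 0, width, 0, 1));  //mouse x sets the threshold
}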

limited to 15 people. you can apply at la cúpula’s website.

to leave you with an example of the possibilities we will be discussing during the workshop, below is a video of a play i worked on with ricardo palmieri, gabriel camelo and the group les commediens tropicales. the multimedia scenery was made using projection mapping techniques, with a combination of puredata and vpt (programmed by palm). hope to see you in córdoba!


datajockey in buenos aires


datajockey buenos aires

i’m honored to announce that i’ll be giving a workshop at flexible lab, buenos aires, in partnership with laboratorio de juguete. the workshop will be based on the previous one that took place at the Museum of Image and Sound of São Paulo. there will be 4 meetings. in the first one, a discussion on the subject of computation and data visualization is proposed. for the next 3 meetings, 3 exercises are proposed: one on text data visualization; one on access and appropriation of the twitter database; and the last one, an experiment with data coming from the physical world.

i’ll also be giving an introduction to the subject of the workshop on the thursday before the 1st meeting, 21/07, from 19:00 to 20:30, at flexible lab as well.

if you happen to be interested in participating, write an e-mail to labodejuguete@gmail.com with a brief curriculum and a few words about your expectations.

well, hope to see you there. special thanks to jorge crowe and to the flexible lab!

twitter data visualization


3D data visualization of the latest twitter messages containing a certain term or group of terms. the visualization was made in processing, using a ‘compact’ version of the twitter4j library and the twitter API. this was proposed as an exercise for the workshop datajockey, which took place at the Museum of Image and Sound of São Paulo, Brazil, from may 17th to 31st, 2011.

it searches for a term in the latest tweets stored in the twitter database and shows those tweets in 3D space; the position and color of each cube are determined by the time of the post. if two or more posts are close in time, a line connects them in space, forming structures. the size of each cube is given by the minute of the post.

sorry i can’t post the actual sketch here, due to processing.js limitations. but the source code is below, so you can try it yourself. before you test this in processing, you must download the twitter4j library file and put it in a folder called “code” inside your sketch folder. you also need a couple of keys to access the twitter database. to get them, register a new application on the twitter developers page.

//twitter 3D data visualization
//by medul.la
//http://medul.la
//based on the sketch '3D Processing World', by Josue Page
//http://www.openprocessing.org/visuals/?visualID=19216
//the twitter connection is made by using the twitter4j java library:
//http://twitter4j.org
 
// Before you use this sketch, register your Twitter application at dev.twitter.com
// Once registered, you will have the info for the OAuth tokens
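// note: if you use a standard twitter4j 2.x jar instead of a "compact" build,
// you will probably also need the corresponding import statements here
// (e.g. import twitter4j.*; plus whichever package holds AccessToken in your version)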
 
//setting twitter API info:
static String OAuthConsumerKey = "PUT YOUR CONSUMER KEY HERE";
static String OAuthConsumerSecret = "PUT YOUR CONSUMER SECRET KEY HERE";
static String AccessToken = "PUT YOUR ACCESS TOKEN HERE";
static String AccessTokenSecret = "PUT YOUR ACCESS TOKEN SECRET HERE";
 
//define parameters
//a word to search for in the tweets database:
String searchTerm = "PUT YOUR SEARCH TERM HERE";
//a number of tweets to work with (you can choose any number up to 100; the sketch assumes the search returns this many results):
int numOfTweets = 50;
 
java.util.List statuses = null;
Twitter twitter = new TwitterFactory().getInstance();
RequestToken requestToken;
String[] theSearchTweets = new String[numOfTweets];
Date[] tweetTimeData = new Date[numOfTweets];
String[] tweetTimeStrings = new String[0];
int[] tweetTimeInts = new int[0];
color[] colors = new color[0];
int[] coords = new int[0];
int objects = numOfTweets, zoom = -300, xCube, yCube, zCube;
Pts[] cubes = new Pts[objects];
color bgColor = 0, lineColor = 255;
float R, G, B;
PFont theFont;
 
void setup() {
  size(1024, 750, P3D);
  connectTwitter();
  getSearchTweets(searchTerm);
  convertDateToString();
  convertStringToInts();
  convertIntsToColor();
  convertIntsToPosition();
  checkColors();
  checkCoords();
  background(255);
  translate(width, 0, 0);
  theFont = createFont("Arial",1000);
  for (int i = 0; i < numOfTweets ; i++) {
    String t = theSearchTweets[i];
    //println(t);
    //each tweet stores three consecutive values in coords, so the index advances 3 per tweet
    cubes[i] = new Pts(coords[i*3], coords[i*3+1], coords[i*3+2], colors[i], 1, t);
  }
}
 
void draw() {
  translate(width/2, height/2, width/2+zoom);
  rotateX(map(mouseY, 0, height, -2*PI, 2*PI));
  rotateY(map(mouseX, 0, width, -2*PI, 2*PI));
  background(bgColor, 50);
  for (int u = 0; u < objects ; u++) {
    cubes[u].drawCubes();
    for (int v=0;v<objects;v++) {
      if (abs(cubes[u].z-cubes[v].z)<200) {
        if (abs(cubes[u].x-cubes[v].x)<200) {
          if (abs(cubes[u].y-cubes[v].y)<200) {
            stroke(lineColor, 50);
            beginShape(LINES);
            vertex(cubes[u].x, cubes[u].y, cubes[u].z);
            vertex(cubes[v].x, cubes[v].y, cubes[v].z);
            endShape();
          }
        }
      }
    }
    cubes[u].change();
  }
 
  if (mousePressed) {
    bgColor = 255;
    lineColor = color(255, 0, 0);
  }
  else {
    bgColor = 0;
    lineColor = 255;
  }
}
 
class Pts {
  int x, y, z;
  float tem;
  color cubeColorC;
  String theText;
 
  Pts(int a, int b, int c, color d, float e, String t) {
    x = a;
    y = b;
    z = c;
    cubeColorC = d;
    tem = b / 20;
    theText = t;
  }
 
  void drawCubes() {
    if (mousePressed) {
      fill(0, 50);
    }
    else {
      fill(lineColor, 50);
    }
    noStroke();
    fill(cubeColorC);
    text(theText, x+30, y, 100, 1000, z);
    pushMatrix();
    translate(x, y, z); 
    fill(cubeColorC);
    box(tem);
    popMatrix();
  }
 
  void change() {
    // random walk: nudge the cube and keep it roughly inside the scene bounds
    if (x < -width) {
      x = -width + 10;
    }
    else {
      if (x > height) {
        x = height - 10;
      }
      else {
        x = x + int(random(-3, 3));
      }
    }
    if (y < -height) {
      y = -height + 10;
    }
    else {
      if (y > width) {
        y = width - 10;
      }
      else {
        y = y + int(random(-5, 5));
      }
      if (z > width) {
        z = width - 10;
      }
      else {
        z = z + int(random(-5, 5));
      }
      if (z < -width) {
        z = -width + 10;
      }
    }
  }
}
 
void keyPressed() {
  if (keyCode == 40) {   // down arrow: zoom out
    zoom -= 300;
  }
  if (keyCode == 38) {   // up arrow: zoom in
    zoom += 300;
  }
}
 
//twitter API functions
 
// Initial connection
void connectTwitter() {
  twitter.setOAuthConsumer(OAuthConsumerKey, OAuthConsumerSecret);
  AccessToken accessToken = loadAccessToken();
  twitter.setOAuthAccessToken(accessToken);
}
 
// Loading up the access token
private static AccessToken loadAccessToken() {
  return new AccessToken(AccessToken, AccessTokenSecret);
}
 
// Search for tweets
void getSearchTweets(String searchTerm) {
 
  String queryStr = searchTerm;
 
  try {
    Query query = new Query(queryStr);    
    query.setRpp(numOfTweets); // results per page: request numOfTweets results (the search API allows up to 100)
    QueryResult result = twitter.search(query);    
    ArrayList tweets = (ArrayList) result.getTweets();    
 
    for (int i = 0; i < tweets.size(); i++) {	
      Tweet t = (Tweet)tweets.get(i);	
      String user = t.getFromUser();
      String msg = t.getText();
      Date d = t.getCreatedAt();	
      theSearchTweets[i] = msg;
      tweetTimeData[i] = d;
      println(theSearchTweets[i]);
      println("----------------");
      println("Tweet by " + user + " at " + d);
      println("----------------");
      println(tweetTimeData[i]);
      println("----------------");
    }
 
  } catch (TwitterException e) {    
    println("Search tweets: " + e);  
  }
 
}
 
void convertDateToString(){
   for (int i = 0; i < tweetTimeData.length; i++){
    SimpleDateFormat df = new SimpleDateFormat("dd/MM/yyyy/HH/mm/ss");
    String s = df.format(tweetTimeData[i]);
    String sArray[] = new String [0];
    sArray = splitTokens(s, "/");
    for (int j = 0; j < sArray.length; j++){
      tweetTimeStrings = append(tweetTimeStrings, sArray[j]);
      println(sArray[j]);
    } 
   }
}
 
void convertStringToInts(){
   for (int i = 0; i < tweetTimeStrings.length; i++){
    int num = int(tweetTimeStrings[i]);
    tweetTimeInts = append(tweetTimeInts, num);
    println("tweetTimeInts at the index of " + i + " is: " + num);
   }
}
 
void convertIntsToColor(){
   for (int i = 3; i < tweetTimeInts.length; i = i+6){  //each tweet yields 6 ints (dd, MM, yyyy, HH, mm, ss); index 3 is the hour
      R = map (tweetTimeInts[i], 0, 24, 0, 255);
      G = map (tweetTimeInts[(i+1)], 0, 60, 0, 255);
      B = map (tweetTimeInts[(i+2)], 0, 60, 0, 255);
      color clr = color (R, G, B); 
      colors = append (colors, clr);
      //println("color stored is = R " + red(clr) + ", G " + green(clr) + ", B " + blue(clr));   
   }
}
 
void convertIntsToPosition(){
   for (int i = 3; i < tweetTimeInts.length; i = i+6){  //same stride as above: hour, minute and second drive the position
      xCube = int(map (tweetTimeInts[i], 0, 24, -width, width));
      yCube = int(map (tweetTimeInts[(i+1)], 0, 60, -height, height));
      zCube = int(map (tweetTimeInts[(i+2)], 0, 60, -width, width));
      coords = append (coords, xCube);
      coords = append (coords, yCube);
      coords = append (coords, zCube);
      //println("position stored is = xCube " + xCube + ", yCube " + yCube + ", zCube " + zCube);   
   }
}
 
void checkColors(){
   for (int i = 0; i < objects; i++){
      println("color stored is = R " + red(colors[i]) + ", G " + green(colors[i]) + ", B " + blue(colors[i]));
   }
}
 
void checkCoords(){
   for (int i = 0; i < coords.length; i=i+3){
      println("position stored is = xCube " + coords[i] + ", yCube " + coords[i+1] + ", zCube " + coords[i+2]);
   }
}

a feedback on augmented architecture


here are a few images from the workshop augmented architecture, which took place earlier this may. the workshop introduced a few concepts of projection techniques and language in order to experiment with the visual perception of objects and environments.

(photos from the workshop: augmented architecture on flickr)

i would like to thank eduardo ricci for the pictures, and also everyone who has participated in the process of these workshops; it has been a great and constructive experience ;)

here’s the presentation i used to talk a little about the history of projection, and about contemporary aspects of what André Parente calls the ‘flight from the black box’, a process in which i argue projection mapping is inserted.

i’ll probably be posting a text i’ve already written on the subject, with a few revisions (and a translation into english and spanish). you can download the pdf with the original version here (in portuguese).

stay tuned.

text data visualization


visualizing text data

i’ve uploaded a processing sketch to the openprocessing platform, done as an exercise for the datajockey workshop. the sketch can be used to visualize textual data as different types of geometrical representations.

press “C” for circles, “S” for squares and rectangles, “A” for arcs and “L” for lines.

the above example is a visualization of a couple of paragraphs taken from machado de assis’ novel dom casmurro.
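
the original sketch is linked below; just to give a rough idea of the approach (this is not the uploaded code, and the input string is only a placeholder), a minimal processing version could map each word of a text to a circle whose diameter comes from the word’s length:

//a minimal illustration of text data visualization: one circle per word
//the input string is a placeholder; paste any text you like
String source = "a short paragraph of text could be pasted here as raw material";

void setup() {
  size(600, 200);
  background(255);
  noStroke();
  String[] words = splitTokens(source, " ");
  float x = 20;
  float y = 40;
  for (int i = 0; i < words.length; i++) {
    float d = words[i].length() * 6;                 //word length -> diameter
    fill(map(i, 0, words.length, 0, 255), 120, 180, 180);
    ellipse(x, y, d, d);
    x += d + 10;                                     //advance along the line
    if (x > width - 40) {                            //wrap to the next line
      x = 20;
      y += 50;
    }
  }
}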

you can check the source code and download the sketch here.


datajockey


datajockey

“(…) while artists in all disciplines are now routinely [using the] computer as a tool in their work, there are still literally only a few artists out there who focus on one of the most fundamental and radical concepts associated with digital computers – that of computation itself (rather than interactivity, network, or multimedia).”

- Lev Manovich, The Anti-Sublime Ideal in Data Art, p.6.

the concept of computation can be approached as the systematic and autonomous mathematization of communication processes. even though the complexification of such processes is an undeniable problem, there is a fact - emphasized by Manovich - that is even more essential: the conversion of culture into a single nature of bits and bytes.

the workshop DataJockey tries to address some of these questions concerning computation processes, providing a space for discussion and experimentation around a possible image, or aesthetics, that dialogues with this problem.

to achieve that, the workshop proposes exercises with the open source technologies Processing and Arduino. these tools will be used in simple examples of data conversion, so that one can experiment with images generated in real time, products of local circumstances (through the use of sensors and cameras) or of other contexts (network information).
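
as a hint of what the physical-world side of this can look like (a minimal sketch under a few assumptions: an arduino on the first serial port, sending one analog reading per line at 9600 baud), processing can read the sensor value and convert it directly into the size and color of a shape:

//converting physical data into an image: a sensor value drives a circle
//assumes an arduino printing one analog reading (0-1023) per line over serial
import processing.serial.*;

Serial port;
float value = 0;  //last sensor reading

void setup() {
  size(400, 400);
  //assumes the arduino is the first serial port; adjust the index if needed
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  background(0);
  noStroke();
  float d = map(value, 0, 1023, 10, width);       //sensor value -> diameter
  fill(map(value, 0, 1023, 0, 255), 100, 150);    //sensor value -> red channel
  ellipse(width/2, height/2, d, d);
}

//called whenever a full line arrives from the arduino
void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line != null) {
    value = float(trim(line));
  }
}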

from the 17th to the 31st of may, tuesdays and thursdays, in the LabMIS/Museum of Image and Sound of São Paulo.

augmented architecture


i would like to start the activities on this blog by announcing that i’ll be at the espaço cultural trackers, in são paulo, on the 11th, 12th and 13th of may, from 8pm to 11pm, giving the workshop arquitetura aumentada (augmented architecture). the purpose is to provide an environment for sharing references and experimenting with projection techniques concerning spatial perception. as a digital tool, we will be using the open source software VPT.

to register, please send an e-mail to video@trackers.cx. more information at the trackers blog.

as a ‘warm-up’, below are images and videos from the last workshop i worked on there, in december 2010:

special thanks to lina lopes and igor spacek for the videos and photos. for more images, please visit lina lopes’ picasa.

i will be posting images and videos of this next 3-day meeting here, so stay tuned.