Design applied to multimedia installations

design applied to multimedia installations with projection

From the 13th to the 15th of March, I’ll be at the Trackers Cultural Center in São Paulo, Brazil, giving a short workshop whose purpose is to provide an intense and productive space for the study and production of multimedia installations that use projection techniques. This workshop is based on previous ones I have given on the subject, but this time we’ll focus on design thinking and models applied to projects that involve projected images.

We’ll be using three main tools to experiment with this sort of media: VPT, a well-known open source projection mapping tool developed by HC Gilje; LPMT, also an open source projection mapping tool and a very nice option for Linux users; and Google SketchUp, to sketch and analyse possible projection situations.

Projection mapping has become a well-known and creative use of visual language applied to unusual contexts. The technology attracts a wide variety of professionals, enthusiasts and researchers with different knowledge backgrounds. Hence, a proper design model is important, considering the diversity of disciplines and techniques such projects usually require. It is interesting to understand how a projector works, what its components are, how the image “fits” the projection support, how a video should be formatted to suit specific projection surfaces, how a projection mapping situation can be properly calculated, and so forth.
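To give a concrete idea of the sort of calculation involved: most projector spec sheets list a throw ratio (throw distance divided by image width), from which you can estimate how large the projected image will be at a given distance. A minimal sketch in Python, assuming a flat surface perpendicular to the projector; the numbers are made up for illustration:

```python
def projected_image_size(throw_distance_m, throw_ratio, aspect=16/9):
    """Estimate the projected image width and height (in meters)
    for a projector at a given distance from a flat surface.

    throw_ratio = throw distance / image width, as listed in spec sheets.
    """
    width = throw_distance_m / throw_ratio
    height = width / aspect
    return width, height

# example: a projector with a 1.5:1 throw ratio placed 4.5 m from the wall
w, h = projected_image_size(4.5, 1.5)
print(w, round(h, 2))  # → 3.0 1.69
```

The same arithmetic works in reverse: multiply the desired image width by the throw ratio to find how far from the surface the projector has to stand.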

Therefore, the workshop program is proposed as follows:

Day 1 (13/03): introduction to design thinking applied to projection technology

What sorts of knowledge are related to projection mapping technology? What is a projector, and how does it work? What sorts of equipment should be considered in such projects? What about video formatting? These common questions will be addressed while we introduce a methodology that can be used in several kinds of projection situations. We’ll do a little exercise in which the participants will be invited to create an installation and sketch it in Google SketchUp.

Day 2 (14/03): VPT and LPMT  – adapting images using software

This is the day on which I’ll introduce the two projection mapping tools and explain a little bit about certain similarities and differences between virtual and “real” world texturing. We’ll go through typical mapping techniques, such as vertex distortion, masking and frame blending. As an exercise, the participants will be invited to adapt images and videos of their choice to the support they chose the previous day.
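For the curious, the core of vertex distortion is simple to state: the corners of the rectangular video frame are dragged onto four user-placed corners on the physical surface, and every pixel in between follows along. A minimal sketch of that remapping in Python, using plain bilinear interpolation (mapping tools use more sophisticated warps; the corner coordinates below are made up for illustration):

```python
def warp_point(u, v, corners):
    """Bilinearly map a point (u, v) in the unit square [0,1]^2
    onto a quad given by its four corners, in the order
    top-left, top-right, bottom-right, bottom-left."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    # interpolate along the top and bottom edges, then between them
    top = ((1 - u) * x0 + u * x1, (1 - u) * y0 + u * y1)
    bottom = ((1 - u) * x3 + u * x2, (1 - u) * y3 + u * y2)
    return ((1 - v) * top[0] + v * bottom[0],
            (1 - v) * top[1] + v * bottom[1])

# a quad whose right edge is pushed inward, as when projecting at an angle
quad = [(0, 0), (90, 10), (85, 95), (0, 100)]
print(warp_point(0.5, 0.5, quad))  # → (43.75, 51.25)
```

Applying this to every pixel (or, in practice, to the vertices of a textured mesh, letting the GPU interpolate the rest) is essentially what the corner-dragging handles in VPT and LPMT do.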

Day 3 (15/03): wrapping it all up

On the last day, participants will be able to finish their installations and will have time for specific questions that could not be answered during the previous days.

To apply for the workshop or to get more info, click here (only in Portuguese).

Hope to see you there!

Posted in Around, Mistakes

Fatto in Casa // Panoramica 2012

fatto in casa - panoramica 2012

And 2012 starts with another presentation of Fatto in Casa, with one of my mentors, Jorge Crowe. We’ll be performing at the Panoramica Festival, at the Espacio Fundación Telefonica in Buenos Aires (1540 Arenales St.), on Saturday, March 10th, at 7pm. Expect a few improvements on the visuals this time: audiovisual sync via Arduino + PDuino, toys that light up when in use on the jugetomatica (the interface I use for the realtime visual remix), and a few more things 😉

Please visit the official website of the event for more info.

Hope to see you there!

Posted in Around, Mistakes

Modified toys occupy Buenos Aires

fatto in casa jorge crowe

If you are not aware of the modified toys that are occupying Buenos Aires (ok, just an opportunistic joke), you will now have four chances to meet them. Jorge Crowe and I will be presenting the live audiovisual performance Fatto in Casa on several occasions over the next few days in the capital of the Porteños.

For those who don’t know the performance, Fatto in Casa explores the re-use and modification of old toys and objects as an audiovisual platform. Jorge plays the soundscapes with a huge set of little frankensteins, while I modify video images taken from his set in real time, using a very improvised prototype based on the Reactable project. We’ve already presented the performance twice earlier this year, and we invite you to check out the next toy attacks!

The first jam will be held on Saturday, December 10th, at Street a delika, in the Zafra espacio de formación artística (3096 Sarmiento St.), at 5pm.

street a delika

On the 11th, we will be performing at the E.A.S.T. – Encuentro de arte sonoro en transito, in the Galería Patio del Liceo (2729 Santa Fé Av.), from 4pm on.

encuentro de arte sonoro en transito

Then, on Thursday, December 15th, we will be at the Espacio Ecléctico, in a very nice event called Genealogía del Objeto. Our noise starts at 8pm.

genealogía del objeto

Finally, on Saturday, December 17th, we’ll be performing at the 3rd edition of the well-known Dorkbot. The time of the performance is still to be announced, so stay tuned to this blog or my Facebook.

dorkbot 3 buenos aires

Below are some images from our first two performances in September of this year, at the Festival de Narrativas Hiper/Textuales and at one of the Sintomática parties. I shall soon post a little video from these two presentations.

See ya around!

Posted in Around, Mistakes

Fatto in Casa

fatto in casa

It’s with great pleasure that I tell you I’ll be performing with Jorge Crowe on September 10th, doing some live imaging for his performance and “artronics” Fatto in Casa. It’s going to be a double night: the first presentation takes place at the Seminario de Narrativas Hiper/Textuales, at the Centro Cultural Recoleta, at 8pm. After that, we’ll head to the Centro Cultural Zaguán and perform again at 1am on Sunday at the Sintomatica party.

Well, hope to see you there! Below is a video from the performance Jorge did in Panoramica, in March 2011, at the Espacio Telefonica.

Posted in Around, Mistakes

presentation of process: demolición/ construcción residence

presentación del proyecto loci - demolición/construcción

what does it mean to historicise today? how do contemporary information storage and transmission mechanisms change the way human memory functions? these are some of the questions that the project loci aims to address.

the name comes from the plural of the latin term locus, which relates to a variety of concepts such as place, situation, state, etc. the project was presented last saturday, july 16th, at la perla memorial, as part of my participation in the demolición/construcción residence. still in its early development stage, the web application was presented as a prototype, in an effort to discuss the development process with the other participating artists.

basically, my approach to the subject of the residence does not dwell on what could be considered its most iconic aspects, as i agree with giorgio agamben’s idea about our clear incapacity to understand these demonstrations of the inhuman, which can be identified in many historical circumstances of changes in the configuration of power, as in the argentinian dictatorship period, which lasted for almost 20 years, between the 1960’s and the 1980’s. with that incapacity in mind, i decided to privilege broader matters that rely on the intersections between memory, art and politics. this conception gradually took form as i was reading agamben’s remnants of auschwitz and josé luis brea’s cultura_RAM. based on the idea of testimony as a “relation between the sayable and unsayable” and of the archive as “a system of relations between the said and unsaid” (AGAMBEN, 1999, p. 45), i decided to discuss a broad concept of memory dimensions in contemporaneity, by proposing a meta-medium that makes references to certain human and computer models of archiving and retrieving information.

as luis brea reminds us, after electronic and digital media, human memory continually tries to adapt itself to the reality imposed by the infoscape, an on-going mutation from a mechanism of retrieval to a channel for judgment of recent and simultaneous experiences. in this sense, the moralizing aspect of memory – the iconization of the past – has decayed in favor of more urgent needs: the preterization of the future and the networked understanding of an instantaneous, complex and continuously morphing information pattern.

thus, luis brea relates the ‘obsolete’ form of history to computer ROM (Read Only Memory), and describes a shift to what can be metaphorized as computer RAM (Random Access Memory), stating that the notion of historical truth as a product of its materiality – that is, of archivable forms of culture – or of its provability (the testimony of specific groups) is giving way to a systematic network of inter-projective processes, in which perspectives marginalized by traditional history emerge and interact with each other, proposing multi-discursive alternatives to what has been accepted as the status quo. thus, “truth” doesn’t matter that much: as long as the information is capable of crossing networks and has affective power, it has its value, no matter the nature of its discourse.


based on these assumptions, loci proposes a deterritorialization of objectified forms of history: archived information from different sorts of digitalized media formats (radio, television, newspapers, books, and so forth) is processed by a software that operates on three levels. one is mathematical: a continuous flux of data textures which are the instantaneous results of algorithmic calculus, juxtaposing and superposing different layers and types of media. another is linguistic: it relates the metadata associated with all the information present in the databank, reconfiguring the network of relations at the textual level, an operation that interferes directly in what is “remembered” and emerges as interface. this, in turn, characterizes the third level, the perceptual: the fragmented images, texts and sounds that compose a continuous meta-form.

the databank for this 1.0 version of loci is composed of information that ran in the argentinian mass media of the 1970’s and 1980’s. most of this material was obtained during research at the archivo provincial de la memoria de córdoba, the cispren and the archivo del servicio de radio y televisión de la universidad nacional de córdoba. some friends also helped with indications of popular culture references from that period. some youtube helped too :)

this databank is not meant to be permanent or unique. loci is being developed not to be the information it handles, but the very process of in formation, as a systematic metaphor of contemporary memory processes. this means that, in future versions, you will be able to load your own databank into loci and integrate it with other databanks available on the web.

mapa mental loci

in other words, loci is proposed as a proto-memory, an evolving system that tries to ‘remember’ facts – rescue memorized information – and, by doing so, generates fictions and counter-facts (or hyper-facts, as i would like to frame them) by saving the modified “memories” as new facts in the databank. it is also intended that loci will look for related information on the web, by using the APIs of engines such as flickr or twitter, extending the historical and meaning spectrum to other limits, to the limit of what wittgenstein defined as the language barrier.

the application will be available on the web as soon as i finish the 1.0 version, which should happen in the following weeks. the release will be announced on this blog. the core development is being done in processing, and the source code of the application will be made available on its release.

more details on the development of the application will be revealed soon. i also intend to publish here a text that conceptually supports the project (as i initially announced a couple of weeks ago).

the presentation of the project was very pleasing precisely because of the feedback i obtained from the other artists and thinkers, and i would like to say a big thank you to all for that. a special thanks goes to lina lopes (whom i would also like to thank for taking most of the presentation pictures), gabriela halac, graciela de oliveira and eugenia almeida for their interest in and contribution to the subjects of the project. and thanks to ángel poyón, fernando poyón and edgar calel for the great exchange of ideas during our time at fundación pluja, which i would also like to thank for the hospitality.

i would also like to thank the la perla memorial, the cispren, the archivo provincial de la memoria de córdoba and the universidad nacional de córdoba for their support to the research.

and of course, a big thank you to graciela de oliveira and soledad sanchez for the incredible effort in organizing the phronesis criolla residence.

here are a few images and a video from the presentation day, in la perla. stay tuned for more information about the loci project.

Posted in Around, Mistakes, Words

camera and sonar visualization

as the last exercise proposed for the workshop datajockey, the processing and arduino codes below create a visual representation of two different phenomena: first, processing grabs the input video coming from a camera attached to the computer and converts it into a matrix of colored pixels. these pixels are then modified in real time, depending on the distance of an object detected by a sonar attached to an arduino board.

here is the code for the processing side of the experiment. the video library used here is gsvideo, since it works on linux:


//camera-sonar module visualization
//based on the example 'mirror' from the gsvideo processing library,
//and the code found in this forum post by dvnanness:

import processing.serial.*;
import codeanticode.gsvideo.*;

// number of columns and rows in our grid
int cols, rows;
// variable for the capture device
GSCapture video;
Serial myPort;
int numSensors = 1;
int linefeed = 10;
int[] sensors;
float read1;
int cellZFactor = 1; // start at 1 to avoid dividing by zero before the first reading
// size of each cell in the grid
int cellSize = 15;

void setup() {
  size(640, 480, P3D);
  // set up columns and rows
  cols = width / cellSize;
  rows = height / cellSize;
  colorMode(RGB, 255, 255, 255, 100);
  // uses the default video input, see the reference if this causes an error
  video = new GSCapture(this, width, height, 30);
  // change the 0 to the appropriate number of the serial port
  // that your microcontroller is attached to
  myPort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  if (sensors != null) {
    // map the latest sonar reading to a usable factor
    read1 = map(sensors[0], 0, 600, 1, 30);
    cellZFactor = int(read1);
  }
  if (video.available()) {;
    background(0);
    // begin loop for columns
    for (int i = 0; i < cols; i++) {
      // begin loop for rows
      for (int j = 0; j < rows; j++) {
        // where are we, pixel-wise?
        int x = i * cellSize;
        int y = j * cellSize;
        int loc = (video.width - x - 1) + y * video.width; // reversing x to mirror the image
        // the rects' color and size depend on the information from the sonar input
        // and on the brightness and colors captured by the camera
        color c = video.pixels[loc];
        float sz = (brightness(c) / 255.0) * cellSize + cellZFactor;
        fill(red(c) / cellZFactor, blue(c) + cellZFactor, green(c) * cellZFactor / 3);
        rect(x + cellSize/2, y + cellSize/2, sz, sz);
      }
    }
  }
}

void serialEvent(Serial myPort) {
  // read the serial buffer:
  String myString = myPort.readStringUntil(linefeed);
  // if you got any bytes other than the linefeed:
  if (myString != null) {
    myString = trim(myString);
    // split the string at the linefeeds
    // and convert the sections into integers:
    sensors = int(split(myString, '\n'));
    // print out the values you got:
    for (int sensorNum = 0; sensorNum < sensors.length; sensorNum++) {
      print("Sensor " + sensorNum + ": " + sensors[sensorNum] + "\t");
    }
    // add a linefeed after all the sensor values are printed:
    println();
  }
}

and here is the arduino code, arranged for sonars similar to the HC-SR04 model.


//defining pins
const int pingPin = 7;   // trigger pin of the sonar
const int pingPin8 = 8;  // echo pin of the sonar

void setup() {
  Serial.begin(9600);
}

void loop() {
  long duration, inches, cm;
  // send a short pulse to trigger the sonar
  pinMode(pingPin, OUTPUT);
  digitalWrite(pingPin, LOW);
  delayMicroseconds(2);
  digitalWrite(pingPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(pingPin, LOW);
  // read the duration of the echo pulse
  pinMode(pingPin8, INPUT);
  duration = pulseIn(pingPin8, HIGH);
  inches = microsecondsToInches(duration);
  cm = microsecondsToCentimeters(duration);
  // send the reading to the serial port for the processing sketch
  Serial.println(cm);
  delay(100);
}

long microsecondsToInches(long microseconds) {
  return microseconds / 74 / 2;
}

long microsecondsToCentimeters(long microseconds) {
  return microseconds / 29 / 2;
}

more details on how to hook an HC-SR04 to an arduino board here.

Posted in Mistakes

twitter data visualization

a 3D data visualization of the last twitter messages containing a certain term or group of terms. the visualization was made in processing, using a ‘compact’ version of the twitter4j library and the twitter API. this was proposed as an exercise for the workshop datajockey, which took place at the Museum of Image and Sound of São Paulo, Brazil, from may 17th to 31st, 2011.

it searches for a term in the last tweets stored in the twitter database and shows those tweets in a 3D space; the position and the color of each cube are given according to the time of its post. if two or more posts are close in time, a line connects them in space, forming structures. the size of each cube is given by the minute of the post.

sorry i can’t post the actual sketch here – it’s due to processing.js limitations. but the source code is below, so you can try it for yourself. before you test this out in processing, you must download the twitter4j library file and insert it in a folder called “code” inside your sketch folder. you also need to get a couple of keys to access the twitter databank. to do that, register a new application on the twitter developers page.

//twitter 3D data visualization
//based on the sketch '3D Processing World', by Josue Page
//the twitter connection is made by using the twitter4j java library
// before you use this sketch, register your twitter application;
// once registered, you will have the info for the OAuth tokens

import java.util.Date;
import java.text.SimpleDateFormat;
import twitter4j.*;
import twitter4j.http.*;

//setting twitter API info:
static String OAuthConsumerKey = "PUT YOUR CONSUMER KEY HERE";
static String OAuthConsumerSecret = "PUT YOUR CONSUMER SECRET KEY HERE";
static String AccessToken = "PUT YOUR ACCESS TOKEN HERE";
static String AccessTokenSecret = "PUT YOUR ACCESS TOKEN SECRET HERE";

//define parameters
//a word to search for in the tweets database:
String searchTerm = "PUT YOUR SEARCH TERM HERE";
//the number of tweets to work with (you can choose any number up to 100):
int numOfTweets = 50;

java.util.List statuses = null;
Twitter twitter = new TwitterFactory().getInstance();
RequestToken requestToken;
String[] theSearchTweets = new String[numOfTweets];
Date[] tweetTimeData = new Date[numOfTweets];
String[] tweetTimeStrings = new String[0];
int[] tweetTimeInts = new int[0];
color[] colors = new color[0];
int[] coords = new int[0];
int objects = numOfTweets, zoom = -300, xCube, yCube, zCube;
Pts[] cubes = new Pts[objects];
color bgColor = 0, lineColor = 255;
float R, G, B;
PFont theFont;

void setup() {
  size(1024, 750, P3D);
  theFont = createFont("Arial", 1000);
  textFont(theFont, 20);
  // connect to twitter, fetch the tweets and derive colors and positions from their timestamps
  connectTwitter();
  getSearchTweets(searchTerm);
  convertDateToString();
  convertStringToInts();
  convertIntsToColor();
  convertIntsToPosition();
  // build one cube per tweet (coords holds x, y, z triples)
  for (int i = 0; i < numOfTweets; i++) {
    String t = theSearchTweets[i];
    cubes[i] = new Pts(coords[i*3], coords[i*3+1], coords[i*3+2], colors[i], 1, t);
  }
}

void draw() {
  background(bgColor, 50);
  translate(width/2, height/2, width/2 + zoom);
  rotateX(map(mouseY, 0, height, -2*PI, 2*PI));
  rotateY(map(mouseX, 0, width, -2*PI, 2*PI));
  // connect the cubes that are close to each other in space
  beginShape(LINES);
  for (int u = 0; u < objects; u++) {
    for (int v = 0; v < objects; v++) {
      if (abs(cubes[u].z - cubes[v].z) < 200 &&
          abs(cubes[u].x - cubes[v].x) < 200 &&
          abs(cubes[u].y - cubes[v].y) < 200) {
        stroke(lineColor, 50);
        vertex(cubes[u].x, cubes[u].y, cubes[u].z);
        vertex(cubes[v].x, cubes[v].y, cubes[v].z);
      }
    }
  }
  endShape();
  // draw the cubes and let them wander
  for (int u = 0; u < objects; u++) {
    pushMatrix();
    cubes[u].drawCubes();
    popMatrix();
    cubes[u].change();
  }
  if (mousePressed) {
    bgColor = 255;
    lineColor = color(255, 0, 0);
  } else {
    bgColor = 0;
    lineColor = 255;
  }
}

class Pts {
  int x, y, z;
  float tem;
  color cubeColorC;
  String theText;

  Pts(int a, int b, int c, color d, float e, String t) {
    x = a;
    y = b;
    z = c;
    cubeColorC = d;
    tem = b / 20;
    theText = t;
  }

  void drawCubes() {
    if (mousePressed) {
      fill(0, 50);
    } else {
      fill(lineColor, 50);
    }
    text(theText, x + 30, y, 100, 1000, z);
    translate(x, y, z);
    fill(cubeColorC);
    box(abs(tem)); // the cube size comes from the minute of the post
  }

  void change() {
    // random walk, kept inside the sketch bounds
    if (x < -width) {
      x = -width + 10;
    } else if (x > height) {
      x = height - 10;
    } else {
      x = x + int(random(-3, 3));
    }
    if (y < -height) {
      y = -height + 10;
    } else if (y > width) {
      y = width - 10;
    } else {
      y = y + int(random(-5, 5));
    }
    if (z > width) {
      z = width - 10;
    } else if (z < -width) {
      z = -width + 10;
    } else {
      z = z + int(random(-5, 5));
    }
  }
}

void keyPressed() {
  if (keyCode == DOWN) { // down arrow zooms out
    zoom -= 300;
  }
  if (keyCode == UP) {   // up arrow zooms in
    zoom += 300;
  }
}

//twitter API functions
// initial connection
void connectTwitter() {
  twitter.setOAuthConsumer(OAuthConsumerKey, OAuthConsumerSecret);
  AccessToken accessToken = loadAccessToken();
  twitter.setOAuthAccessToken(accessToken);
}

// loading up the access token
private static AccessToken loadAccessToken() {
  return new AccessToken(AccessToken, AccessTokenSecret);
}

// search for tweets
void getSearchTweets(String searchTerm) {
  try {
    Query query = new Query(searchTerm);
    query.setRpp(numOfTweets); // number of results per page
    QueryResult result =;
    ArrayList tweets = (ArrayList) result.getTweets();
    for (int i = 0; i < tweets.size() && i < numOfTweets; i++) {
      Tweet t = (Tweet) tweets.get(i);
      String user = t.getFromUser();
      String msg = t.getText();
      Date d = t.getCreatedAt();
      theSearchTweets[i] = msg;
      tweetTimeData[i] = d;
      println("Tweet by " + user + " at " + d);
    }
  } catch (TwitterException e) {
    println("Search tweets: " + e);
  }
}

void convertDateToString() {
  for (int i = 0; i < tweetTimeData.length; i++) {
    SimpleDateFormat df = new SimpleDateFormat("dd/MM/yyyy/HH/mm/ss");
    String s = df.format(tweetTimeData[i]);
    String[] sArray = splitTokens(s, "/");
    for (int j = 0; j < sArray.length; j++) {
      tweetTimeStrings = append(tweetTimeStrings, sArray[j]);
    }
  }
}

void convertStringToInts() {
  for (int i = 0; i < tweetTimeStrings.length; i++) {
    int num = int(tweetTimeStrings[i]);
    tweetTimeInts = append(tweetTimeInts, num);
    println("tweetTimeInts at the index of " + i + " is: " + num);
  }
}

void convertIntsToColor() {
  // every 6 ints are day/month/year/hour/minute/second; hour, minute and second become R, G and B
  for (int i = 3; i < tweetTimeInts.length; i = i + 6) {
    R = map(tweetTimeInts[i], 0, 24, 0, 255);
    G = map(tweetTimeInts[i+1], 0, 60, 0, 255);
    B = map(tweetTimeInts[i+2], 0, 60, 0, 255);
    color clr = color(R, G, B);
    colors = append(colors, clr);
  }
}

void convertIntsToPosition() {
  // hour, minute and second also become the x, y and z coordinates
  for (int i = 3; i < tweetTimeInts.length; i = i + 6) {
    xCube = int(map(tweetTimeInts[i], 0, 24, -width, width));
    yCube = int(map(tweetTimeInts[i+1], 0, 60, -height, height));
    zCube = int(map(tweetTimeInts[i+2], 0, 60, -width, width));
    coords = append(coords, xCube);
    coords = append(coords, yCube);
    coords = append(coords, zCube);
  }
}

void checkColors() {
  for (int i = 0; i < objects; i++) {
    println("color stored is = R " + red(colors[i]) + ", G " + green(colors[i]) + ", B " + blue(colors[i]));
  }
}

void checkCoords() {
  for (int i = 0; i < coords.length; i = i + 3) {
    println("position stored is = xCube " + coords[i] + ", yCube " + coords[i+1] + ", zCube " + coords[i+2]);
  }
}

Posted in Mistakes

text data visualization

visualizing text data

i’ve uploaded a processing sketch to the openprocessing platform, done as an exercise for the datajockey workshop. the sketch can be used to visualize textual data in different types of geometrical representations.

press “C” for circles, “S” for squares and rectangles, “A” for arcs and “L” for lines.

the above example is a visualization of a couple of paragraphs taken from machado de assis’ novel dom casmurro.

you can check the source code and download the sketch here.


Posted in Mistakes