3rd avlab córdoba meeting


on july 12th, from 4pm to 7pm, i’ll be participating in the 3rd avlab córdoba meeting, at centro cultural españa. this meeting proposes interactive installations for kids and “young adults”. i’ll be installing a prototype based on the tagtool open hardware/software device. so if you happen to be in córdoba that day, come along and have fun doing some digital graffiti with us! you’ll be able to “virtually” paint architectural spaces and monuments inside the CCE space.

below there’s a video about the tagtool project. bring your kiddos!

special thanks to daniel gonzález xavier for the opportunity.

Posted in Around

datajockey in buenos aires


i’m honored to announce that i’ll be giving a workshop at flexible lab, buenos aires, in partnership with laboratorio de juguete. the workshop will be based on the previous one, which took place at the Museum of Image and Sound of São Paulo. there will be 4 meetings. the first one proposes a discussion on the subject of computation and data visualization. the next 3 meetings propose 3 exercises: one on text data visualization; one on accessing and appropriating the twitter database; and, last, an experiment with data coming from the physical world.

i’ll also be giving an introduction to the subject of the workshop on the thursday before the 1st meeting, 21/07, from 19:00 to 20:30, at flexible lab as well.

if you happen to be interested in participating, write an e-mail to labodejuguete@gmail.com with a brief curriculum and a few words about your expectations.

well, hope to see you there. special thanks to jorge crowe and to the flexible lab!

Posted in Around

demolición/ construcción


from june 27th to july 16th, i’ll be joining a group of artists and researchers invited for the residence phronesis criolla, proposed by the demolición/construcción project, in córdoba, argentina. led by graciela de oliveira and soledad sánchez, the project proposes a discussion between contemporary art and politics, starting from the question of what can be built from what can be considered (at least at some level) destructive.

the residence proposes to take la perla, a facility used as a center of political oppression and torture during the country’s dictatorship period (declared a memorial site in 2007), as both an object of investigation and an atelier.

i’ll be posting some material here as documentation of whatever comes up during the experience. i also intend to post a text outlining some of the principles i’m using as a platform for the creation process.

also, during this period in córdoba, i’ll probably be participating in one or two workshops, which i will also be announcing and documenting here, so stay tuned.

special thanks to graciela de oliveira and soledad sánchez for the opportunity!

Posted in Around

some feedback on datajockey

here are some pictures taken during the workshop datajockey, which took place from the 17th to the 31st of may at LabMIS/Museum of Image and Sound of São Paulo. during the 5 meetings, we went through 3 exercises, each proposing a different approach to understanding computation phenomena, basic programming logic, data mining and conversion, and computerized visual representations. the workshop was mainly based on processing.

i would like to thank all the participants for their interest, collaboration and patience (i am a learner as well, after all). hope these meetings can serve as inspiration for present and future projects. special thanks to paola de marco and elizabeth pereira, of the LabMIS production team, and to lina lopes for the assistance during the workshop and for the pictures.


Posted in Around

camera and sonar visualization

as the last exercise proposed for the workshop datajockey, the processing and arduino codes below create a visual representation combining two different phenomena: processing grabs the input video coming from a camera attached to the computer and converts it into a matrix of colored pixels; these pixels are then modified in real time, depending on the distance of an object detected by a sonar attached to an arduino board.

here is the code for the processing side of the experiment. the video library used here is gsvideo, since it works on linux:


//camera-sonar module visualization
//by medul.la
//based on the example 'mirror', of the gsvideo processing library,
//and the code found in this forum post by dvnanness:

import processing.serial.*;
import codeanticode.gsvideo.*;

// number of columns and rows in our grid
int cols, rows;
// variable for the capture device
GSCapture video;
Serial myPort;
int numSensors = 1;
int linefeed = 10;
int[] sensors;
float read1;
int cellZFactor = 1; // starts at 1 so we never divide by zero before the first reading
// size of each cell in the grid
int cellSize = 15;

void setup() {
  size(640, 480, P3D);
  // set up columns and rows
  cols = width / cellSize;
  rows = height / cellSize;
  colorMode(RGB, 255, 255, 255, 100);
  // uses the default video input, see the reference if this causes an error
  video = new GSCapture(this, width, height, 30);
  // list all the available serial ports;
  // change the 0 to the appropriate number of the serial port
  // that your microcontroller is attached to
  println(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  if (sensors != null) {
    // if valid data has arrived, map the sonar reading to a z factor
    read1 = map(sensors[0], 0, 600, 1, 30);
    cellZFactor = int(read1);
  }
  if (video.available()) {
    video.read();
    background(0);
    // begin loop for columns
    for (int i = 0; i < cols; i++) {
      // begin loop for rows
      for (int j = 0; j < rows; j++) {
        // where are we, pixel-wise?
        int x = i * cellSize;
        int y = j * cellSize;
        int loc = (video.width - x - 1) + y * video.width; // reversing x to mirror the image
        // the rects' color and size depend on the information from the sonar input
        // and the brightness and colors captured by the camera
        color c = video.pixels[loc];
        float sz = (brightness(c) / 255.0) * cellSize + cellZFactor;
        fill(red(c) / cellZFactor, blue(c) + cellZFactor, green(c) * cellZFactor / 3);
        rect(x + cellSize/2, y + cellSize/2, sz, sz);
      }
    }
  }
}

void serialEvent(Serial myPort) {
  // read the serial buffer:
  String myString = myPort.readStringUntil(linefeed);
  // if you got any bytes other than the linefeed:
  if (myString != null) {
    myString = trim(myString);
    // split the string at the commas
    // and convert the sections into integers:
    sensors = int(split(myString, ','));
    // print out the values you got:
    for (int sensorNum = 0; sensorNum < sensors.length; sensorNum++) {
      print("Sensor " + sensorNum + ": " + sensors[sensorNum] + "\t");
    }
    // add a linefeed after all the sensor values are printed:
    println();
  }
}

and here is the arduino code. this is arranged for sonars similar to the HC-SR04 model.


// defining pins (trigger and echo for an HC-SR04-style sonar)
const int pingPin = 7;  // trigger
const int pingPin8 = 8; // echo

void setup() {
  pinMode(pingPin, OUTPUT);
  pinMode(pingPin8, INPUT);
  // open the serial connection read by the processing sketch
  Serial.begin(9600);
}

void loop() {
  long duration, inches, cm;
  // send a short pulse on the trigger pin
  digitalWrite(pingPin, LOW);
  delayMicroseconds(2);
  digitalWrite(pingPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(pingPin, LOW);
  // measure how long the echo pin stays high
  duration = pulseIn(pingPin8, HIGH);
  inches = microsecondsToInches(duration);
  cm = microsecondsToCentimeters(duration);
  // send the reading to the processing sketch
  Serial.println(cm);
  delay(100);
}

long microsecondsToInches(long microseconds) {
  // sound travels at roughly 74 microseconds per inch, there and back
  return microseconds / 74 / 2;
}

long microsecondsToCentimeters(long microseconds) {
  // roughly 29 microseconds per centimeter, there and back
  return microseconds / 29 / 2;
}

more details on how to hook an HC-SR04 to an arduino board here.

Posted in Mistakes

twitter data visualization

3D data visualization of the last twitter messages containing a certain term or group of terms. the visualization was made in processing, using a ‘compact’ version of the twitter4j library and the twitter API. this was proposed as an exercise for the workshop datajockey, which took place at the Museum of Image and Sound of São Paulo, Brazil, from may 17th to 31st, 2011.

it searches for a term in the last tweets stored in the twitter database and shows those tweets in a 3D space; the position and color of each tweet are derived from the time of the post. if two or more posts are close in time, a line connects them in space, forming structures. the size of each cube is given by the minute of the post.

sorry i can’t post the actual sketch here, due to processing.js limitations. but the source code is below, so you can try it for yourself. before you test it in processing, you must download the twitter4j library file and place it in a folder called “code” inside your sketch folder. you also need a couple of keys to access the twitter database; to get them, register a new application on the twitter developers page.
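the resulting folder layout looks something like this (the sketch name and the jar version are just examples, not the actual file names):

```shell
# example layout only: sketch name and twitter4j jar version are placeholders
mkdir -p twitter_viz/code
# the sketch itself lives next to the "code" folder:
touch twitter_viz/twitter_viz.pde
# copy the twitter4j jar you downloaded into "code", e.g.:
# cp ~/Downloads/twitter4j-core-2.2.4.jar twitter_viz/code/
ls -R twitter_viz
```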

//twitter 3D data visualization
//by medul.la
//based on the sketch '3D Processing World', by Josue Page
//the twitter connection is made by using the twitter4j java library
// Before you use this sketch, register your Twitter application at dev.twitter.com
// Once registered, you will have the info for the OAuth tokens

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.ArrayList;
import twitter4j.*;
import twitter4j.http.*;

//setting twitter API info:
static String OAuthConsumerKey = "PUT YOUR CONSUMER KEY HERE";
static String OAuthConsumerSecret = "PUT YOUR CONSUMER SECRET KEY HERE";
static String AccessToken = "PUT YOUR ACCESS TOKEN HERE";
static String AccessTokenSecret = "PUT YOUR ACCESS TOKEN SECRET HERE";

//define parameters
//a word to search for in the tweets database:
String searchTerm = "PUT YOUR SEARCH TERM HERE";
//the number of tweets to work with (you can choose any number up to 100):
int numOfTweets = 50;

java.util.List statuses = null;
Twitter twitter = new TwitterFactory().getInstance();
RequestToken requestToken;
String[] theSearchTweets = new String[numOfTweets];
Date[] tweetTimeData = new Date[numOfTweets];
String[] tweetTimeStrings = new String[0];
int[] tweetTimeInts = new int[0];
color[] colors = new color[0];
int[] coords = new int[0];
int objects = numOfTweets, zoom = -300, xCube, yCube, zCube;
Pts[] cubes = new Pts[objects];
color bgColor = 0, lineColor = 255;
float R, G, B;
PFont theFont;

void setup() {
  size(1024, 750, P3D);
  theFont = createFont("Arial", 32);
  textFont(theFont);
  // fetch and convert the data before building the cubes
  connectTwitter();
  getSearchTweets(searchTerm);
  convertDateToString();
  convertStringToInts();
  convertIntsToColor();
  convertIntsToPosition();
  for (int i = 0; i < numOfTweets; i++) {
    String t = theSearchTweets[i];
    // coords holds x, y, z triplets, so each cube reads 3 consecutive entries
    cubes[i] = new Pts(coords[i*3], coords[i*3 + 1], coords[i*3 + 2], colors[i], 1, t);
  }
}

void draw() {
  background(bgColor);
  translate(width/2, height/2, width/2 + zoom);
  rotateX(map(mouseY, 0, height, -2*PI, 2*PI));
  rotateY(map(mouseX, 0, width, -2*PI, 2*PI));
  // connect the cubes that are close to each other in space
  beginShape(LINES);
  for (int u = 0; u < objects; u++) {
    for (int v = 0; v < objects; v++) {
      if (abs(cubes[u].z - cubes[v].z) < 200 &&
          abs(cubes[u].x - cubes[v].x) < 200 &&
          abs(cubes[u].y - cubes[v].y) < 200) {
        stroke(lineColor, 50);
        vertex(cubes[u].x, cubes[u].y, cubes[u].z);
        vertex(cubes[v].x, cubes[v].y, cubes[v].z);
      }
    }
  }
  endShape();
  // draw and animate each cube
  for (int u = 0; u < objects; u++) {
    pushMatrix();
    cubes[u].drawCubes();
    popMatrix();
    cubes[u].change();
  }
  if (mousePressed) {
    bgColor = 255;
    lineColor = color(255, 0, 0);
  }
  else {
    bgColor = 0;
    lineColor = 255;
  }
}

class Pts {
  int x, y, z;
  float tem;
  color cubeColorC;
  String theText;

  Pts(int a, int b, int c, color d, float e, String t) {
    x = a;
    y = b;
    z = c;
    cubeColorC = d;
    tem = b/20;
    theText = t;
  }

  void drawCubes() {
    if (mousePressed) {
      fill(0, 50);
    }
    else {
      fill(lineColor, 50);
    }
    text(theText, x + 30, y, 100, 1000, z);
    translate(x, y, z);
    fill(cubeColorC, 50);
    box(tem); // cube size derives from the post's time data, via 'tem'
  }

  void change() {
    // random walk, kept inside the window bounds
    if (x < -width) {
      x = -width + 10;
    }
    else if (x > height) {
      x = height - 10;
    }
    else {
      x = x + int(random(-3, 3));
    }
    if (y < -height) {
      y = -height + 10;
    }
    else if (y > width) {
      y = width - 10;
    }
    else {
      y = y + int(random(-5, 5));
    }
    if (z > width) {
      z = width - 10;
    }
    else if (z < -width) {
      z = -width + 10;
    }
    else {
      z = z + int(random(-5, 5));
    }
  }
}

void keyPressed() {
  if (keyCode == DOWN) { // keyCode 40
    zoom -= 300;
  }
  if (keyCode == UP) { // keyCode 38
    zoom += 300;
  }
}

//twitter API functions
// Initial connection
void connectTwitter() {
  twitter.setOAuthConsumer(OAuthConsumerKey, OAuthConsumerSecret);
  AccessToken accessToken = loadAccessToken();
  twitter.setOAuthAccessToken(accessToken);
}

// Loading up the access token
private static AccessToken loadAccessToken() {
  return new AccessToken(AccessToken, AccessTokenSecret);
}

// Search for tweets
void getSearchTweets(String searchTerm) {
  String queryStr = searchTerm;
  try {
    Query query = new Query(queryStr);
    query.setRpp(numOfTweets); // number of results per page (up to 100)
    QueryResult result = twitter.search(query);
    ArrayList tweets = (ArrayList) result.getTweets();
    for (int i = 0; i < tweets.size() && i < numOfTweets; i++) {
      Tweet t = (Tweet) tweets.get(i);
      String user = t.getFromUser();
      String msg = t.getText();
      Date d = t.getCreatedAt();
      theSearchTweets[i] = msg;
      tweetTimeData[i] = d;
      println("Tweet by " + user + " at " + d);
    }
  } catch (TwitterException e) {
    println("Search tweets: " + e);
  }
}

void convertDateToString() {
  for (int i = 0; i < tweetTimeData.length; i++) {
    SimpleDateFormat df = new SimpleDateFormat("dd/MM/yyyy/HH/mm/ss");
    String s = df.format(tweetTimeData[i]);
    String[] sArray = splitTokens(s, "/");
    for (int j = 0; j < sArray.length; j++) {
      tweetTimeStrings = append(tweetTimeStrings, sArray[j]);
    }
  }
}

void convertStringToInts() {
  for (int i = 0; i < tweetTimeStrings.length; i++) {
    int num = int(tweetTimeStrings[i]);
    tweetTimeInts = append(tweetTimeInts, num);
    println("tweetTimeInts at the index of " + i + " is: " + num);
  }
}

void convertIntsToColor() {
  // indices 3, 4 and 5 of each group of 6 are hour, minute and second
  for (int i = 3; i < tweetTimeInts.length; i = i + 6) {
    R = map(tweetTimeInts[i], 0, 24, 0, 255);
    G = map(tweetTimeInts[i+1], 0, 60, 0, 255);
    B = map(tweetTimeInts[i+2], 0, 60, 0, 255);
    color clr = color(R, G, B);
    colors = append(colors, clr);
    //println("color stored is = R " + red(clr) + ", G " + green(clr) + ", B " + blue(clr));
  }
}

void convertIntsToPosition() {
  for (int i = 3; i < tweetTimeInts.length; i = i + 6) {
    xCube = int(map(tweetTimeInts[i], 0, 24, -width, width));
    yCube = int(map(tweetTimeInts[i+1], 0, 60, -height, height));
    zCube = int(map(tweetTimeInts[i+2], 0, 60, -width, width));
    coords = append(coords, xCube);
    coords = append(coords, yCube);
    coords = append(coords, zCube);
    //println("position stored is = xCube " + xCube + ", yCube " + yCube + ", zCube " + zCube);
  }
}

void checkColors() {
  for (int i = 0; i < objects; i++) {
    println("color stored is = R " + red(colors[i]) + ", G " + green(colors[i]) + ", B " + blue(colors[i]));
  }
}

void checkCoords() {
  for (int i = 0; i < coords.length; i = i + 3) {
    println("position stored is = xCube " + coords[i] + ", yCube " + coords[i+1] + ", zCube " + coords[i+2]);
  }
}
Posted in Mistakes

some feedback on augmented architecture

here are a few images from the workshop augmented architecture, which took place earlier this may. the workshop introduced a few concepts on projection techniques and language, in order to experiment with the visual perception of objects and environments.


i would like to thank eduardo ricci for the pictures, and also everyone who participated in these workshops; it has been a great and constructive experience 😉

here’s the presentation i used to talk a little about the history of projection, and about contemporary aspects of what André Parente calls the ‘flight from the black box’, a process in which i argue projection mapping is inserted.

i’ll probably be posting a text i’ve already written on the subject, with a few revisions (and translations into english and spanish). you can download the pdf with the original version here (in portuguese).

stay tuned.

Posted in Around

text data visualization


i’ve uploaded a processing sketch to the openprocessing platform, done as an exercise for the datajockey workshop. the sketch can be used to visualize textual data as different types of geometric representations.

press “C” for circles, “S” for squares and rectangles, “A” for arcs and “L” for lines.
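the key mapping is a simple dispatch on the pressed character. sketched in plain java (the `shapeForKey` method name is mine, not from the actual sketch):

```java
// illustration of the keyboard mapping described above;
// the method name shapeForKey is hypothetical, not from the openprocessing source
public class ShapeKeys {
    static String shapeForKey(char k) {
        switch (Character.toUpperCase(k)) {
            case 'C': return "circles";
            case 'S': return "squares and rectangles";
            case 'A': return "arcs";
            case 'L': return "lines";
            default:  return "unchanged"; // any other key leaves the drawing mode alone
        }
    }

    public static void main(String[] args) {
        System.out.println(shapeForKey('c')); // circles
        System.out.println(shapeForKey('L')); // lines
    }
}
```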

the above example is a visualization of a couple of paragraphs taken from machado de assis’ novel dom casmurro.

you can check the source code and download the sketch here.


Posted in Mistakes


datajockey

“(…) while artists in all disciplines are now routinely using the computer as a tool in their work, there are still literally only a few artists out there who focus on one of the most fundamental and radical concepts associated with digital computers – that of computation itself (rather than interactivity, network, or multimedia).”

– Lev Manovich, The Anti-Sublime Ideal in Data Art, p.6.

the concept of computation can be approached as the systematic and autonomous mathematization of communication processes. even though the complexification of such processes is an undeniable problem, there is a fact, emphasized by Manovich, that is even more essential: the conversion of culture into a single nature of bits and bytes.
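that “single nature” is easy to see in code: a word and a pixel color both reduce to plain numbers. a minimal java illustration (mine, not part of the workshop material):

```java
import java.nio.charset.StandardCharsets;

// both a fragment of text and a pixel color reduce to plain integers
public class BitsAndBytes {
    // extract the red channel of an ARGB-packed pixel
    static int redChannel(int argb) {
        return (argb >> 16) & 0xFF;
    }

    public static void main(String[] args) {
        // a piece of culture: one word, encoded as bytes
        byte[] word = "data".getBytes(StandardCharsets.UTF_8);
        // a piece of image: an opaque orange pixel, packed as ARGB
        int pixel = 0xFFFFA500;
        System.out.println(word[0]);           // 100, the byte for 'd'
        System.out.println(redChannel(pixel)); // 255
    }
}
```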

the workshop DataJockey tries to address some of these questions concerning computation processes, providing a space for discussion and experimentation around a possible image or aesthetics that dialogues with this problem.

to that end, the workshop proposes exercises with the open source technologies Processing and Arduino. these tools will be used in simple examples of data conversion, so that one can experiment with images generated in real time, as a product of local circumstances (through the use of sensors and cameras) or of other contexts (network information).

from the 17th to the 31st of may, tuesdays and thursdays, in the LabMIS/Museum of Image and Sound of São Paulo.

Posted in Around

augmented architecture

i would like to start the activities on this blog by announcing that i’ll be at espaço cultural trackers, in são paulo, on the 11th, 12th and 13th of may, giving the workshop arquitetura aumentada (augmented architecture), from 8pm to 11pm. the purpose is to provide an environment for sharing references and experimenting with projection techniques concerning spatial perception. as a digital tool, we will be using the open source software VPT.

to register, please send an e-mail to video@trackers.cx. more information at the trackers blog.

as a ‘warm-up’, below are images and videos from the last workshop i gave there, in december 2010:

special thanks to lina lopes and igor spacek for the videos and photos. for more images, please visit lina lopes’ picasa.

i will be posting images and videos of this next 3-day meeting here, so stay tuned.



Posted in Around