Archive Page 3

I submitted my idea for a civic engagement game to the “Real-World Games for Change” design challenge, and won. The challenge, sponsored by Games for Change and Come Out & Play, was the first of its kind run by these organizations.

The organizations were looking for a game, designed by independent game designers, that is played in the real world, be it on the streets or in a park, and that leaves the world a slightly better place. The critical part of the design challenge: the game had to include a mechanic that drives players to leave an actual positive change on the physical environment where the game is played. They chose our mobile game, Commons, as the winner of the Real-World Games for Change Challenge 2011.

Towards the end of our development and testing phase, we were contacted by Susana Ruiz, a doctoral candidate at USC and a G4C advisory board member, for a research study on mobile civic gaming that she is conducting, commissioned by Intel.

Susana posed some really interesting questions, which I responded to this week.

1. Can you describe Commons – its basic gameplay, rule-set and goals?

Commons is a game for urban communities to improve their city through citizen stewardship.

With Commons, you compete to do good, while problems in your city get fixed. Report a problem or recommend an improvement in your neighborhood that you think deserves attention and resources, and show your city some lovin’. Go on short missions around town to earn bonus points, and unlock City awards to level up through the game. Get your ideas voted on by other players to win the game.

In Commons, share the things that you care most about fixing and improving in your neighborhood, and discover new ways to explore your city.

Game Rules

Players can go anywhere within the game’s geographic boundaries, south of Chambers Street, and can travel by any means necessary.

A “City Task” must describe a public place or issue that exists outside, in the open air and exterior environs. The insides of buildings, vehicles, and underground subway stations do not count in this game.

A “City Task” must include 3 things: a text description, a photo, and a street intersection or location.

Players can choose whether to travel around as individuals or in groups; winners will be selected based on votes accumulated and will receive prizes.
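For illustration, the three required parts of a “City Task” could be modeled as a small record with a completeness check. This is purely a sketch: the real Commons app’s data model isn’t public, and all field names here are hypothetical.

```java
// Hypothetical sketch of a City Task record; field names are illustrative,
// not taken from the actual Commons app.
public class CityTask {
    final String description;   // text description of the issue
    final String photo;         // reference to the photo taken on site
    final String location;      // street intersection or location

    CityTask(String description, String photo, String location) {
        this.description = description;
        this.photo = photo;
        this.location = location;
    }

    // A task is submittable only when all three required parts are present.
    boolean isComplete() {
        return notBlank(description) && notBlank(photo) && notBlank(location);
    }

    private static boolean notBlank(String s) {
        return s != null && !s.trim().isEmpty();
    }

    public static void main(String[] args) {
        CityTask task = new CityTask("Broken bench needs repair",
                                     "bench.jpg", "Wall St & Broad St");
        System.out.println(task.isComplete()); // true: all three parts present
    }
}
```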

2. You’ve had a playtest recently in NYC, how did it go? What key observations did your team make about how the players engage with each other, the rules, the city, and community members?

We had a great playtest in NYC on May 21st. We were worried at first that the game experience would be hindered by the fact that people had to share iPhones to play the game because not everyone personally owned an iPhone, but it turned out that there was more synergy and creativity exchanged between players when they played in pairs and in groups of three and four. As a result, we’ve decided to encourage people to play in small teams on game day.

We observed that people enjoyed having companions to bounce ideas off of, crafting the wording of submissions together as a team, and sharing what they love about the neighborhood with each other. To our delight, the digital game almost became a sort of discussion starter, a launch pad, to get people talking amongst themselves about their city.

This underlines one of the original concepts of the Commons game, which is to get people to share and learn about what issues they share in common, and to form civic action groups in accordance with their common concerns and interests.

3. Would you say that location is important to the notion of being civically engaged and empowered? Do you think that mobile and location-aware gaming poses new or unique redefinitions of civics and activism?

Location is a very powerful marker, and oftentimes separates those who have access to information, resources, and power from those who do not. That is why we still have wars between nation states over border demarcations, why citizens dispute city and county boundaries, and why neighbors argue over tree lines. I believe that location is incredibly important to the notion of being civically aware and engaged, as it is the cornerstone of ‘belonging’ to a city. One of my favorite talks on the subject of human development and cosmopolitanism is Ethan Zuckerman’s keynote from CHI 2011, “Desperately Seeking Serendipity”, in which he points out that the majority of the world’s population now lives in cities, in part because cities are powerful communication technologies with a ‘rapid diffusion of new ideas and practices to multiple communities’. This is a great feature of large cities, and I think that the more we as citizens can leverage this communal serendipity for our own progress and social good, the better we will make our world.

I think that location-aware gaming merely exposes that marker which we inherently know to be critically important in defining who we are and how we act in society.

The idea for the name of our game – Commons – comes from “The Tragedy of the Commons”, an essay written by Garrett Hardin in 1968 about a dilemma that has been debated since Classical times. The ‘tragedy of the commons’ (per Wikipedia) is a dilemma arising from a situation in which multiple individuals, acting independently and rationally in their own self-interest, ultimately deplete a shared limited resource, even when it is clear that it is not in anyone’s long-term interest for this to happen. We believe that we citizens have a responsibility to avoid this dilemma, and that we should all strive to protect and responsibly maintain our communal resources for the benefit of all.
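To make that dynemma concrete, here is a toy simulation of the dilemma (not part of the game; all numbers are arbitrary) in which each player rationally takes a little more than the shared pool regenerates, and the commons collapses:

```java
// Toy tragedy-of-the-commons simulation. Each round, every player takes
// takePerPlayer units from a shared pool, which then regenerates by regen.
// When the players collectively take more than the pool regenerates, it
// is eventually depleted, even though that serves no one's long-term interest.
public class CommonsToy {
    public static int roundsUntilDepleted(double pool, int players,
                                          double takePerPlayer, double regen) {
        int rounds = 0;
        while (pool > 0 && rounds < 10000) {   // cap in case the pool is sustainable
            pool -= players * takePerPlayer;   // everyone acts in self-interest
            pool += regen;                     // the commons partially recovers
            rounds++;
        }
        return rounds;
    }

    public static void main(String[] args) {
        // 100 units shared by 10 players taking 2 each, regenerating 10 per
        // round: a net loss of 10 per round, so the pool is empty in 10 rounds.
        System.out.println(roundsUntilDepleted(100, 10, 2, 10)); // 10
    }
}
```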

The game is designed to foster an interesting dynamic, based on the theory of the Tragedy of the Commons, in which players are psychologically pulled between two disparate, yet intertwined worlds, where on the one hand they must compete to win the game and, on the other hand, they must cooperate to have a meaningful impact on their city environment. In the Commons game, players can’t do one without the other.

Excerpts from Ethan Zuckerman’s keynote:

As of 2008, the majority of the world’s population lives in cities. In highly developed countries (the membership of the OECD), the figure is 77%, while in the least developed countries (as classified by the UN), 29% of people live in cities.

Cities are technologies for trade, for learning, for worship, but they’re also a powerful communication technology. Cities enable realtime communication between different individuals and groups and the rapid diffusion of new ideas and practices to multiple communities. Even in an age of instantaneous digital communications, cities retain their function as a communications technology that enables constant contact with the unfamiliar, strange and different.

4. What do you think are some of the main challenges for designers wishing to create game-rooted experiences that engage a specific site, local inhabitants, and visitors?

I think some of the best “big games” are those designed for a specific site or locality, taking into consideration the specificity of the geographic area and its inhabitants. One of my favorite games like this is Pac-Manhattan, designed by ITP students a few years ago for the area around New York City’s Washington Square Park, because the streets and intersections surrounding the park are arranged in a grid similar to the original Pac-Man game board.

However, the challenge in designing a site-specific game that I find most interesting is the issue of what you leave behind in the community and the environment after the game is finished. I like to design games that leave a positive impact on the site where they are played. I think this is an important aspect to consider when designing a big game, especially one played in densely populated urban environments like New York City.

5. What is your perspective on gamification? Proponents may argue that gamification involves the everyday and the urban in new, unexpected and empowering ways. Do you think that there is civic action potential? Do you think there are constraints or even dangers inherent in this trend moving forward?

I don’t think people need attractive game mechanics or dynamics to want to get involved in community service, civic activism, or any other sort of activity, but gamification does introduce an additional element of fun and competition, which I love. I am also a really big believer in the social aspect of gaming: doing activities from a thematic, community- or mission-centered perspective helps keep people focused on the objective while having fun and connecting with one another.

On the flip side, I think it’s pretty difficult to rely solely on gaming (an external reward structure) as the primary incentive for getting people to participate in civic engagement or to join a cause – they have to care about it, or want to care about it, first.

I’m also incredibly interested in persuasive technology and design: technology designed to change people’s attitudes or behaviors through persuasive interaction, not through force. I think there is an opportunity in designing the user experience of games to shape people’s attitudes and behaviors towards social impact.

6. Similarly, do you have a particular view on crowdsourcing – particularly in context to non-profit models, social change and participatory civics?

There is a huge payoff in crowdsourcing information for participatory civics and social change. This also goes back to the importance of location in cities: with crowdsourced data, you can start to see trends and patterns across the masses that may be geo-location specific.

The relationship between citizens and their government is changing. Technology gives citizens a different kind of voice than we’ve had in the past, making each individual’s input count, collectively gathering information from crowds, and connecting people to one another through social interactions that amplify their voices. In designing the Commons game, we wanted to build an e-citizen platform to turn the process of democracy into a game that has lasting social impact on the city and community.

Retroreflective Burqa was featured at the ITP Spring Show 2011 a few weeks ago. We had a great stream of people lining up to learn more about the strange-looking blue & silver cape.

Source: Inhabitat NYC

I got a lot of great feedback from people, primarily asking when we were planning to field test the design, and when we were hoping to manufacture more of these garments! It was great to have such a diverse group of people interested in learning about the issues of veiling and garment design. Men and women wanted to try it on.

The Retroreflective Burqa was picked up by Inhabitat New York City in an article about the “Best Green Designs from the NYU ITP 2011 Spring Student Show.”

The design magazine, Core 77, featured the Retroreflective Burqa in its beautiful gallery from the ITP Spring Show.

Many people were interested in learning more, and sharing about the project with their friends. Chinita Nomada wrote a blog post about her experience seeing the Retroreflective Burqa at the show, and Shehab Hamad posted nice photos on a Flickr stream.

Thanks for coming out to the show!

Read about the inspiration for this project in these blog posts:

1. Research & Concept Development
2. Materials and Testing
3. Why Blue Color?
4. Sewing Pattern
5. Face Mesh Weaving
6. Fashion Hacking the Afghan Burqa

I just returned from CHI 2011 in Vancouver, Canada, where I ran a conference-wide “big game” for 2,500 attendees. (If you’re not familiar with CHI, it’s ACM’s Conference on Human Factors in Computing Systems, the premier international conference on human-computer interaction.)

The social game is called BackChatter, created by Local No. 12’s members Mike Edwards, Colleen Macklin, John Sharp, and Eric Zimmerman. It’s a time-based, massively multiplayer game about Twitter trend-spotting, designed to be played at conferences and other events like GDC (Game Developers Conference) and SXSW (South by Southwest).

To win at BackChatter, a player needs strategic thinking and socio-linguistic smarts. Really, it’s perfect for human-computer interaction designers :)

This was the first time we played a big game like this at CHI, and my first time running a game this large for thousands of people. It turns out that Twitter is a pretty good gaming platform, because it taps into what people are already talking about and where they’re already looking for the latest information.

People were pretty stoked about it, and despite the occasional wifi traumas at the Vancouver Convention Center (an absolutely GORGEOUS venue, I must add), we had a lot of excitement and participation throughout the week-long event.

People at the conference told me they had a lot of fun playing it, and they thought it was a good idea — here are a few sample tweets from players:

“Holy crap! I came in 2nd overall in the CHI game! #chi2011 It was super fun, a great idea.” –Jen Fernquist

“Hi, thanks for the Backchatter game, it was a fun exercise :)” –Tom Sommerville

My colleague at the Social Game Lab in Brooklyn set up a server to host our game instance on the domain, and we ran the game through our “BackChatter at CHI 2011” website, directed by Professor Katherine Isbister, co-chair of the Games Committee at CHI 2011. There were 16 rounds in the game and three overall winners, with @benki scoring an impressive 35,270 points for his chosen words, like “buxton”, “vancouver”, and “artifacts”.

Here’s a quick summary of how to play, but you can read more here.


How to join

1. Follow the game by sending this tweet:
follow CHI2011game

2. Pick your first three words by direct messaging the game:
d CHI2011game word1 word2 word3

Objective

1. Score points and win prizes by guessing what everyone at CHI is going to be tweeting about.

2. Each round, pick three words that you think are going to be popular in tweets marked with #CHI2011.

Scoring

1. You get points when other people tweet your words.

2. Words score each time they’re tweeted: when a word appears in a tweet marked with the hashtag #CHI2011, it scores its value for everyone who picked it.

3. The more bets on a word, the lower its value: the more people that pick a word in a round, the fewer points it gives you per tweet. Less frequently picked words have a higher scoring value.

Note that your own tweets never score for you, and common words like “the” and “at” don’t count.
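The scoring rule can be sketched in code. BackChatter’s actual formula isn’t published, so the base value of 100 and the simple integer division below are assumptions, chosen only to show the inverse relationship between a word’s popularity among players and its value:

```java
// Hypothetical BackChatter-style scoring: a word's per-tweet value is
// assumed to be a base score divided by the number of players who picked
// it, so rarer picks score more. The real game's formula is not published.
public class BackchatterScore {
    static final int BASE_VALUE = 100; // assumed base points per tweet

    // Points one player earns when a word they picked appears in a tagged tweet.
    static int pointsPerTweet(int numPlayersWhoPicked) {
        if (numPlayersWhoPicked < 1) return 0; // nobody picked it: no score
        return BASE_VALUE / numPlayersWhoPicked;
    }

    public static void main(String[] args) {
        // A word only you picked is worth the full base value per tweet...
        System.out.println(pointsPerTweet(1)); // 100
        // ...while a word five players bet on is worth a fifth as much.
        System.out.println(pointsPerTweet(5)); // 20
    }
}
```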

We had our final class in Big Games on Friday. Our class headed out to Washington Square Park for some good ol’ fashioned play testing in the grass. The best game we played was “Battle Ninja”, based on the Ninja hand-slap game, which uses speed, agility, and reflexes to attack and defend against your enemies.

In regular Ninja, two or more people stand facing each other and count down out loud: “1, 2, 3, ninja.” After the countdown, each player strikes a ninja pose. Player 1 takes one swift movement to try and slap player 2’s hands. Player 2 makes one motion to move out of the way. On the next countdown, player 2 goes to slap player 1’s hands, and player 1 tries to move out of the way. This continues until someone strikes the other person’s hand. Once a player attacks (or dodges their opponent’s strike), they must FREEZE in the position where they left off.

The object of the game is to use your hand to hit your opponent’s hand. In this case, “hand” means fingers, thumbs, backhand, and palm.

In our version of “Battle Ninja”, the class of 20 people is divided into two teams of 10. Each team names a “king”, who carries a styrofoam sword and frisbee shield; the others are ninjas sworn to protect their king. Every player on a team wears a special cloth band to denote loyalty to their king. The two teams start the game on opposing sides of an open playing field, at least 20 feet apart, with an empty space in between, like the two sides of a chess board.

The kings take turns calling out “1, 2, 3, attack” for their team, and the game proceeds as usual, with each side taking turns attacking or defending. Players can attack whoever is near them (attack, as in slapping an opponent’s hands) with one swift motion. If a player’s hand is slapped, that player must die on the battlefield and strike a death pose in place.

Our class had a lot of fun playing this game. We play tested several rounds and came up with different strategies to protect the king. One strategy is to form a box around the king, so that players don’t move forward until absolutely forced to engage with the other team. Another is for the king to move sideways, as far away from the other team as possible. (Kings are not allowed to move backwards, because that makes it too difficult for the opposing team to catch ’em.) Another is to form a V-shape around the king, protecting him from flank attacks. Yet another is to send assassins to kill the opposing king while the others defend.

Photos courtesy of Michael Edgecumbe.

The Retroreflective Burqa offers a new, alternative design choice to the traditional Afghan burqa garment – a look that is safe, functional, and fashionable.

The Retroreflective Burqa project aims to empower burqa-wearing women by making carefully selected design changes and low-cost improvements to the traditional Afghan burqa, within the bounds of Sharia law, rendering the garment more versatile, functional, and fashionable.

The burqa is an outer garment that the majority of Afghan women wear every day, like a coat or jacket. It should be comfortable to wear, sensitive to the nuances of the everyday landscape and its environment, and culturally appropriate.
full length burqa still
By re-thinking certain elements of the garment, I hope to empower Afghan women through carefully selected design choices and to enable these women to remain true to themselves and to their religious and cultural practices. To empower a person is to provide her the opportunity to make choices and decisions regarding her life. In the case of the Afghan burqa, there is currently only one design available to women — the iconic, monolithic burqa. The Retroreflective Burqa offers an innovative, alternative design choice to today’s burqa, thereby giving Afghan women the right to make a choice, express their preferences, and exercise decision-making about a garment that they wear every day.

I hope that this endeavor will also open a dialogue around the issue of women’s empowerment and design. Helping women to make more informed choices regarding the various aspects of their lives, including selecting alternative clothing designs, may lead them to make more informed choices in other aspects of their lives such as health care, education, housing, nutrition, and economic development.

Retroreflective Burqa from Suzanne Kirkpatrick on Vimeo.

Dull silver color turns bright white at night in direct headlights:
Retroreflective Burqa - back

In Afghanistan, public safety on the roads and highways is very low; there are no traffic lights, street lamps, or overhead lights, no sidewalks or curbs, and very few paved roads. Few people have driver’s licenses, and fines for pedestrian violations are non-existent. Thousands of people die each year from roadside injuries, especially at night. Women wearing burqas are particularly vulnerable to roadside accidents because they appear as blueish-gray silhouettes in the dark. Furthermore, the women cannot easily see oncoming traffic or obstacles in the road through their face veils, and they have difficulty reacting quickly to roadside hazards and protecting their young children from them. Afghanistan is also very dusty, which compounds the low-visibility problem: pedestrians become even less visible in dust clouds, and the dust further obscures burqa-wearing women’s view of what is around them.

The Retroreflective Burqa project explores ways to improve this visibility problem in order to protect pedestrian women wearing burqas, so that they can be seen better by motor vehicles in the early morning hours, at dusk, and at night, without drawing unnecessary attention to these women from other pedestrians.

Face mesh is a powdery-silver color in daylight, but when lit by motorists’ headlights, turns white:
Retroreflective Burqa - front

I came up with this idea based on my experience and observations while living in Afghanistan. Read more about my cited references and sources of inspiration in my first blog post.
back diamonds
I’ve been inspired by haute couture designs by Louise Golden and by the designer Lela Ahmadzai. I also gained a lot of insight about materials and fabrics through visits to the Material ConneXion library in NYC. At ITP, I followed Alex Vessels and Mindy Tchieu’s project “We Flashy”, which uses retroreflective material on contemporary clothing.

There are several interesting online videos about the Afghan burqa and its cultural significance, including this short video by Brishkay Ahmed which I found particularly interesting.

Some books I’ve particularly been inspired by are the following:

1) Veiled Sentiments: Honor and Poetry in a Bedouin Society by Lila Abu-Lughod.
2) Emma Tarlo’s Visibly Muslim. Chapter 7 discusses Cindy van den Bremen who designed the “Capster”.

Read about the background and inspiration for this project in previous blog posts:

1. Research & Concept Development
2. Materials and Testing
3. Why Blue Color?
4. Sewing Pattern
5. Face Mesh Weaving
6. Fashion Hacking the Afghan Burqa

What’s Next?
Although this burqa design is based on requirements that I collected in interviews with women who wear burqas, it has not yet been “field tested” in Afghanistan. I would like to send this garment to Afghanistan with some of my colleagues so that women and their families can see it, wear it, and provide feedback about the design and wearability of this garment in country.

I’ve always supported the idea of Afghan-run businesses and handicraft organizations, such as Turquoise Mountain and Kabul Dolls, that promote products and art for Afghans, made by Afghans. If this retroreflective burqa were to gain enough interest and support in country, I think it would make for an excellent women-run business in Afghanistan, with all of the profits going to Afghan women entrepreneurs. It would require research into manufacturing garments in Central Asia and into sourcing materials from China and nearby economies.

Another thing I’d like to do is make a version in black fabric with retroreflectivity, using the lightweight material from Saudi Arabia that women already use.

Primate biologist Tony DiFiore tracks monkeys in the Amazon rainforest in Ecuador. At present, he is using Telemetry Solutions RS 4000 GPS collars to track the monkeys’ locations at specific times during the day and night. The biggest difficulty in retrieving GPS data from these collars is finding the monkeys in the forest and positioning the antenna for a successful download. Currently, Tony and his research assistants must track a collared monkey via radio telemetry, stand within 10-20 meters of the monkey in a heavily wooded area with dense foliage and moisture, point the antenna in the direction of the animal, and then manually press a button in the Telemetry Solutions software to start downloading the GPS data.

We want to make this data gathering process easier for primate biologists who track monkeys in the field. We know that the monkeys come to the salt lick a few times per week, and we know that the camera trap could have a motion sensor connected to a microcontroller to sense when an animal has passed by the salt lick. Our project idea is to set up an Automated GPS Data Downloading base station inside the camera trap that will automatically download GPS data from a monkey’s collar when it passes by the mineral lick.

Photo of RS 4000 GPS collar:
monkey collar

Tali Blankfeld and I decided to explore two paths to solving this problem, in order to see which solution would be the most efficient and effective.

Solution 1: USB Bus Pirate

Our first path was to investigate using a USB Bus Pirate, a universal serial interface tool, to “sniff” the GPS data incoming from the collar via the UHF antenna to the PC.

USB Bus Pirate

The Bus Pirate connects to a PC USB port. We can send a few commands to the Bus Pirate from a serial terminal on the PC, such as “read data” and “write data”, to make a copy of the GPS collar data being sent to the Telemetry Solutions software.


There is a standard set of Bus Pirate communication commands that we looked at to set the mode to UART (asynchronous serial), select the data display format, and so on.

We ran into a block when we needed to set the PC side serial port speed, because we didn’t know the baud rate (bps) that the Telemetry Solutions base station is using to download data via USB. We asked Telemetry Solutions for this information, but they didn’t respond to our request.

If we had known the baud rate, then we would have tried to use some of the bus interaction commands, such as bus start / stop conditions, read byte, write values, and repeat.

Solution 2: Batch Script / Robot Class in Processing

Upon discovering that Telemetry Solutions had a new product similar to what we were interested in building, we contacted Matt, their engineer, to inquire about how this product works.

Read about Telemetry Solution’s Automatic GPS data recovery system product.

The main difference between their product and our idea is that we would like to conserve power and lower the cost of the download unit (which currently costs $2,500) by driving it with a motion detector, so that it only uses power once motion is detected. Telemetry Solutions’ system constantly pings, continually searching for collared monkeys, and therefore uses a lot more power. To top it off, their automated software costs an additional $500.

In short, we were interested in designing an affordable product that conserves power by using a motion sensor to detect when an animal passes by the mineral lick, triggering Telemetry Solutions’ Automatic GPS data recovery system, which would then determine whether any of those animals were the collared monkeys whose data we are trying to capture.

Our solution was to write a batch script, similar to the “Automator” Apple scripts in the Mac operating system. According to Wikipedia, “In DOS, OS/2, and Microsoft Windows, a batch file is a text file containing a series of commands intended to be executed by the command interpreter… When a batch file is run, the shell program (usually COMMAND.COM or cmd.exe) reads the file and executes its commands, normally line-by-line. Batch files are useful for running a sequence of executables automatically and are often used to automate repetitive or tedious processes.”

Screenshot of Telemetry Solutions collar software:

TS Collar software

The “download” button that we’re trying to hit with the robot class:

download button

When we contacted Matt at Telemetry Solutions in regards to the Automatic GPS data recovery system, we asked him if there would be a way to build a batch file that talks to their software code for the Automatic GPS data recovery system. We also wondered how much power this system requires and how it’s powered. Finally, we asked about the range of the base station.

Telemetry Solutions eventually responded with the following answers to our questions:

    – 32 mA is the base station current drain during a remote download.
    – The Auto Download software has a settings.csv file. You can probably affect the settings.csv file with motion sensors to control the software, which in turn controls the base station.
    – The base station is powered by a battery; it does not draw power from the computer’s USB.
    – The difference between the base station that Tony uses now and the automated system isn’t just the software; we need to modify the base station as well. It only takes us one day to do that, so the unit needs to be sent back. In order for you to obtain the software, you just need to return the base station and we will make the modification. The software sells for $500.
    – The range of the base station is not affected by the auto download software; it will work at the same range whether controlled manually or by the auto download software.
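Their second answer suggests controlling the Auto Download software by rewriting its settings.csv from a motion-sensor trigger. As a minimal sketch of that idea, here is a hypothetical toggle; the real file’s schema is unknown to us, so the “download,on” / “download,off” line is invented purely for illustration:

```java
// Sketch of the settings.csv idea from Telemetry Solutions' reply: a motion
// event rewrites the Auto Download software's settings file to switch
// downloading on or off. The "download,on" schema is hypothetical.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SettingsToggle {
    static void setDownload(Path settings, boolean enabled) throws IOException {
        // Hypothetical one-line schema: "download,on" or "download,off"
        Files.write(settings, ("download," + (enabled ? "on" : "off")).getBytes());
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("settings", ".csv");
        setDownload(p, true);   // motion detected: enable downloads
        System.out.println(new String(Files.readAllBytes(p))); // download,on
        setDownload(p, false);  // download finished: disable again
    }
}
```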

When researching methods for automating the software, we considered using Girder early in our project. Girder, an automation program, seemed relatively easy to use at first, but once we explored it further we realized it was actually quite complex and would not be ideal for this project. Instead, we decided to stick with a program we were already familiar with: Processing.

Once we moved on to Processing, we became very interested in how the Robot Class could be used to locate specific X and Y coordinates / pixels on the screen that would target the “download” button in the Telemetry Solutions software.

The Robot class makes it possible for your Java program to temporarily take control of the mouse and keyboard input functionality at the operating-system level.

Some of our classmates in the Assistive Technology course worked with this Java class on another project, using the Kinect to open iTunes, and it proved successful. Having looked at their example, we are excited about the possibility of using it for our project as well.

Aside from using the Robot class to automate the GPS software, we are interested in how it can be used to “wake up” the unit, so that power is conserved when motion is not detected and data is only acquired when necessary. After some investigation and help from an ITP resident, we figured out how to use the Robot class to wake/sleep the computer after each automated download.

Here’s a sample of our code using the robot class to wake/sleep the computer when motion is detected by a sensor connected to an Arduino microcontroller. We used a counter as a substitute example for what would be incoming serial data from a sensor connected to the Arduino, so that every time the counter reaches 100, the mouse moves.


//import libraries
//use robot class to wake computer up / put it to sleep
//trigger robot class with incoming serial data from a motion sensor (using a counter in this example)

import java.awt.Robot;
import java.awt.event.InputEvent;

Robot r;
int xCoor = 400;
int yCoor = 0;
int counter = 0;

void setup() {
  size(950, 500);

  //create robot instance – our friend Erica told us we should include this part!
  try {
    r = new Robot();
  }
  catch (Exception e) {
    e.printStackTrace();
  }
}

void draw() {
  //trigger mouseMove
  // if (monkey passes in front of motion detector) {
  //   if (button pressed == false) {
  //     r.mouseMove(x coord, y coord);
  //   }
  // }

  //substitute a counter for the sensor: every 100 frames, move the mouse
  counter++;
  if (counter == 100) {
    counter = 0;
    r.mouseMove(xCoor, yCoor);
  }
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      yCoor--;
      r.mouseMove(xCoor, yCoor);
    } else if (keyCode == DOWN) {
      yCoor++;
      r.mouseMove(xCoor, yCoor);
    }
  }
}

While both of our approaches are reasonable attempts at a custom solution, we were able to demonstrate that the Robot class can wake the computer from sleep when motion is detected and put it back to sleep after each automated download completes. For now, this seems like the most viable solution to pursue, and it deserves further investigation on Windows.

In sum, we were fairly limited in our project experimentation by the gaps in our knowledge about how the Telemetry Solutions automated base station is configured. Had we known more about the product’s power system and general power consumption, and the specifics regarding the base station software, we could have gone deeper in our pursuits to build a solution that would really integrate with the Telemetry Solution RS 4000 GPS collars.

For our Big Games class, my group is going to build an ARG (Alternate Reality Game) that presents an unfolding narrative told through fragmented video and hidden puzzles on the web. Our ARG encourages crowdsourcing techniques among players to inspire them to help a fictional character solve a serial murder case wrapped up in a government conspiracy :-)

My classmates and I were inspired by the alternate reality game “The Beast”, created by Microsoft in 2001 to promote Steven Spielberg’s DreamWorks film A.I.: Artificial Intelligence, which spurred the creation of the Cloudmakers group on the web that persisted long after The Beast was finished. The setup for our game was also inspired by William Gibson’s novel Pattern Recognition.

In our alternate reality game, players will enter the game through a series of short video clips, found around NYC via QR codes, that tell the story of Amy Toloni, a forensic toxicologist recruited by the NYPD to consult on a mysterious serial murder case. Shortly after being asked to assist, however, Amy is suddenly dropped from the case, and all of the evidence disappears as though the murders never happened. Unable to forget about the case, and with all the signs pointing toward a conspiracy, Amy launches her own investigation. But when she finds herself being followed, and her apartment broken into and trashed, she realizes she must recruit help, so she leaves behind the video clips of her story for ordinary people to find, in the hopes that they will become intrigued and find ways to assist her. Players will interact with Amy through existing social media such as Facebook, Twitter, Tumblr, blogs, and online forums, where Amy will leave encrypted information about what she has managed to uncover thus far. It will be up to the players to put the pieces together and help her solve the mystery, which may reveal truths that no one could have ever imagined.

The project is both an experiment in storytelling through fragments, and a foray into the world of alternate reality games and how to construct artificial experiences that seamlessly blend into players’ own realities. We want to convey the “This is not a game” mentality by creating a fictional person who seems as real as possible. We will accomplish this both by proliferating a holistic online identity and by telling her story in an engaging and familiar way. The game is intended to be played over a long period of time, depending on how long it takes players to unlock clues and solve puzzles.

The game will consist of the following:
– A series of 8-10 video clips that tell one longer narrative, uploaded to different YouTube/Vimeo accounts with entirely different names, and hidden throughout the city using QR codes
– Different physical and electronic artifacts related to the subject matter of the game and its main character, distributed throughout NYU and across the web on different web sites
– A fictional character, Amy, whose identity is propagated online through a personal website, blog, Facebook account, Twitter, etc.
– A series of posts on forums related to aliens and government conspiracies by Amy
– A pre-made set of puzzles and challenges that will be issued to players by Amy once they begin interacting with her in real-time

Oh yeah, and if you were wondering about the name of our game — A.L.T. — we named our character Amy L. (short for Lucia) Toloni whose initials spell out “A.L.T.” So, we thought “A.L.T. the A.R.G.” sounded kinda catchy.

The “making” process involved several phases, including selecting the kind of fabrics I wanted to use, drafting measurements from the original garment, creating a mockup of the product, sewing everything together, and finally decorating the garment with reflective materials.

I went to the Mood Fabrics store in New York City’s famous garment district to find fabric. I also visited Material Connexion and P&S Fabrics in SoHo, but Mood was the best place for what I needed. I spent 4 hours wandering the aisles at Mood, and finally decided on a corn-blue jersey rayon for the main bulk of the garment that would breathe and cool the body much better than the original rayon. For the cape inserts, I got a silver pleated polyester, which is very lightweight.

mood fabric

I draped both of these materials in the store and discovered that they hang and flow nicely with a person’s gait. I also found a beautiful silver-and-blue polyester-silk blend that is embroidered with diamond shapes, which matches the retroreflective designs well.

mood draping

Next, I met with Melody to cut the fabric from our pattern that we had just made. It was fun to have a production partner.

cutting fabric

She sewed the pieces separately in stages, beginning with the back blue panels and silver triangle inserts, then the crown and headband, and finally the front piece and face mesh. Fortunately, the jersey rayon isn’t too stretchy or flimsy, so it was relatively easy to sew. The silver pleated fabric was much thinner and more slippery.

sewing in progress

We met several times to create mockups of the pieces in order to see what they would all look like together. ITP has a dressform on the floor, which I used to visualize the various elements of the design by pinning everything on the dressform.

mockup 1

This way, I could see how the different elements complemented each other and fit together.

dress form mock up

We took a photograph with flash to visualize how the retroreflective strips and face mesh would look as part of the overall garment. Not bad!!

retroreflective photo

I like how the silver inserts turned out in the back cape. We draped the garment on the dressform to see the outline of the cape and how the silver inserts hang.

mock up

The trickiest part of the production was making the headband and crown piece. The original burqa’s head piece is too small, and I wanted to make a garment that *most* people could try on and wear, so it was necessary to design a better-sized head piece. Melody used muslin for the mockup, which worked great because we could mark on it.

head piece

Based on our original estimations, we first attempted a 22″ head band with 8.5″ top, but it was too small and not quite deep enough. Then, Melody tried making slightly larger sized crowns of 23.75″ and 24.5″ with 8.75″ pleated top, but those didn’t sit down far enough and eventually slipped due to the slippery jersey fabric. She also tried sewing a 24.5″ band with a 9″ pleated top which sat well, but also slipped.


To prevent slipping, Melody sewed a lightweight lining between the blue material and the wearer’s head. Finally, we decided to try a 23.5″ band with a 2.2″ band height and a 9.25″ dome. I tested this model on several women at school and confirmed that this was the right size for the average woman.

gabby burqa

As part of this user testing, I also took measurements of people’s faces to calculate where we should place the face mesh panel on the front of the garment.

mouth measurement

As suspected, the range was roughly 5″ x 5″. Here are the measurements:

34mm / 1.35″ width of each eye
120mm / 4.7″ width from far edges of eyes (just inside the temples)
70mm / 2.76″ height from nose tip to chin (nose & mouth breathing area)
44mm / 1.75″ height from band seam to top of eyes (forehead area)
115mm / 4.5″ height from low brow to bottom edge of chin
47mm / 1.85″ width of mouth

Next steps:
1. Seal the face mesh piece, and attach it to the front.
2. Sew the retroreflective strip designs.

Read about the background and inspiration for this project in previous blog posts:

1. Research & Concept Development
2. Materials and Testing
3. Why Blue Color?
4. Sewing Pattern
5. Face Mesh Weaving

I decided to weave my own face mesh material for my burqa garment using 1/23″-wide retroreflective strips. The strips are flat, very flexible, and supple, so they are easy to manipulate and weave together. I also think this material will not be abrasive against people’s faces when they wear the burqa. The retroreflective material is made by 3M.

Metlon strips

I built my own weaving “loom”, which is essentially just a grid made from a piece of flat wood and nails. I made the size of the loom approximately 5 x 5 inches, based on a woman’s average face measurement (width of eyes and height from eyes to mouth).


To make the mesh spacing fairly narrow, I hammered the nails in a straight line close together, approximately 1/20″ apart.

mesh 1

Then, I tied each retroreflective strip in a knot around each nail and started weaving the strips in a basket weave, like making a woven lattice pie crust. This process took many hours by hand, but it could be automated with a machine. I enjoyed making this mesh with my own hands. It was very therapeutic.

closeup mesh weave

Here’s a video of my weaving:

Next, I need to test this face mesh on the actual garment, and observe whether people can see through it. I also want to see how it reacts when light is shining directly on it.

Read about the background and inspiration for this project in previous blog posts:

1. Research & Concept Development
2. Materials and Testing
3. Why Blue Color?
4. Sewing Pattern
5. Face Mesh Weaving
6. Fashion Hacking the Afghan Burqa

By their very nature, monkeys are incredibly mobile. They are a self-organizing network. They are the perfect self-forming “mobile nodes”.

How can we design according to wildlife constraints, and use these constraints to our advantage?

I’ve been thinking lately about how to use monkeys’ natural advantages and strengths for data sharing purposes. I’ve come up with a great idea for my final project in our “Wildlife Observation and Monkey Tracking” class that leverages primates’ existing behaviors: their mobility and consistent roaming patterns in the forest, their frequent social interactions with one another, and their loyal returns to the salt licks where camera traps have been placed.

I would like to design an asynchronous mobile data mesh network and communications protocol for monkey radio collars that will tell primate biologists 1) where the monkeys travel, 2) when and 3) how often monkeys come into proximity with other collared monkeys, and 4) where and 5) for how long these social encounters occur.

What is a data mesh?

A data mesh is a distributed storage network that uses a synchronization system to update copies of data sets between two or more devices, according to a set of communication protocols established among the devices.

In the “Mobile Monkey Mesh” that I would like to design, each collared monkey carries a recording, storage, and communications device (in this case, a radio collar) attached to its body that serves as a “mobile data node”. Each device records and stores GPS data about the monkey’s location and proximity event* data whenever that monkey comes into contact with another collared monkey. The camera traps and salt licks also each house a recording, storage, and communications device inside the camera casing that serves as a “stationary data node”. Each of these devices also records and stores proximity event data whenever a monkey comes near the camera trap. So, for example, if we have 8 collared monkeys and 4 camera traps, then we have a total of 12 data nodes in our mesh.

Whenever one of these storage devices comes in proximity of another storage device, the data being locally stored on each device is shared across devices. If the data has already been synced before in the past, then only the changes made in each device since the last synchronization will be exchanged between devices, and information about the date / time of each sync is recorded to both devices. In this way, the data captured about many monkeys is shared and distributed across multiple devices, mobile and stationary, in an asynchronous, opportunistic fashion.
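A minimal sketch of this delta-sync idea in Java (all class, field, and method names here are hypothetical, not from any real collar firmware):

```java
import java.util.*;

// One logged reading, keyed by the node that originally recorded it.
class Record {
    final String nodeId;
    final long timestamp;  // epoch seconds
    final String payload;  // e.g. "lat,lon,alt" or a proximity event

    Record(String nodeId, long timestamp, String payload) {
        this.nodeId = nodeId;
        this.timestamp = timestamp;
        this.payload = payload;
    }
}

// A mobile (collar) or stationary (camera trap) data node in the mesh.
class DataNode {
    final String id;
    final List<Record> store = new ArrayList<>();
    // When we last synced with each peer, so only the delta is exchanged.
    final Map<String, Long> lastSync = new HashMap<>();

    DataNode(String id) { this.id = id; }

    void log(Record r) { store.add(r); }

    // Two-way sync: each side sends only records newer than the last sync,
    // then both sides record the sync time. (A real implementation would
    // track per-record provenance or version vectors, since this simple
    // timestamp filter can miss relayed records with old timestamps.)
    void syncWith(DataNode peer, long now) {
        List<Record> toPeer = delta(lastSync.getOrDefault(peer.id, 0L));
        List<Record> toMe = peer.delta(peer.lastSync.getOrDefault(id, 0L));
        peer.store.addAll(toPeer);
        store.addAll(toMe);
        lastSync.put(peer.id, now);
        peer.lastSync.put(id, now);
    }

    List<Record> delta(long since) {
        List<Record> out = new ArrayList<>();
        for (Record r : store)
            if (r.timestamp > since) out.add(r);
        return out;
    }
}
```

Note how Monkey A’s record reaches a node that never met Monkey A: A syncs with B, then B relays the record to C.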

By syncing data with each other on the fly, the monkeys are actually doing most of the legwork required for data logging and collection in the depths of the jungle, which would hopefully save biologists a lot of time and energy otherwise spent gathering this data from one tranquilized monkey at a time.

What is used in the field today?

The monkeys’ roaming ranges are anywhere between 4 hectares (200 x 200 m) and over 600 hectares (2 x 3 km). Some of the monkeys have a very small geographic range in which they roam, and others have a much larger one.

At present, primate biologist Anthony Di Fiore and his team use three different types of radio collars to track the physical movements and behaviors of 10 local species of monkeys in the Yasuní Biosphere Reserve in the Ecuadorean Amazon forest:

  • traditional radio transmitter collars,
  • collars with radio and GPS transmitters,
  • collars with radio transmitters and proximity sensors

All three of these solutions work fairly well individually, but they require the biologist to locate a specific Monkey “A” via radio telemetry (which can take several days), retrieve that specific monkey’s data, and then repeat the process for every collared monkey in the study group, which is very time-consuming, physically exhausting, and, from a data collection standpoint, inefficient.

Asynchronous mobile data mesh network

What if it were possible to retrieve Monkey A’s data from Monkey “B”? Furthermore, what if it were possible to not only retrieve Monkey A’s data from Monkey B, but to retrieve the most recent data for Monkey “C”, Monkey “D”, and Monkey “E” all at once just by finding any given monkey in the group who happens to be near you?

To do this, we would need a single collar that incorporates all three of these technologies of radio, GPS, and proximity detection, with a simple communications and data syncing protocol that sends and receives data in a point-to-point relay between nodes whenever those nodes come into physical proximity with one another.

Monkeys as mobile relay nodes

One of the biggest challenges about communications in the jungle is our limited access to cellular towers; it’s very difficult to establish traditional communications networks or wifi clouds that normally enable us to sync data across a network cloud. At the Yasuní Biosphere Reserve, there are currently no cell towers anywhere within roughly 10 km of the Tiputini Biodiversity Station.

I’d like to explore a way around this dilemma by designing a mobile mesh topology ecosystem that relies on the monkeys themselves as mobile relay nodes, plus the placement of a few stationary nodes at the salt licks / camera traps or other commonly known waypoints.

In this way, as long as a given node, Monkey A, can reach another node, Monkey B, that is acting as a router in the mesh, then node “Monkey A” remains in the network. This is particularly useful for monkey tracking, since some animals wander farther than others, and not all of them wander together in a pack at any given moment, but most of them interact socially with each other from time to time.

In addition, if a hub node were established at a salt lick, a place where we already know the monkeys like to come to snack, then we could even write a config.xml file on that hub node that would update the other mobile nodes: when one monkey came to the salt lick, that monkey’s updated device would then carry the updated config file to the other monkeys it came into contact with.

How would it work?

Each collared monkey would have its own unique electronic identifier, radio and GPS transmitter, and 4GB or 16GB mini-SD card storage capacity. Each monkey’s device logs GPS data for that individual according to a prescribed schedule (e.g. every hour or two), so as not to waste power, and each device broadcasts a unique ID code (ping), while simultaneously listening for others nearby.
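The prescribed schedule could be as simple as a pair of per-device interval checks. Here is a sketch in Java, with the intervals purely assumed for illustration (a GPS fix every two hours, an ID ping every 30 seconds); the class and method names are hypothetical, not from any real collar firmware:

```java
// Sketch of a prescribed logging schedule for one collar.
// Intervals are assumptions: fix every 2 hours, ping every 30 seconds.
class Scheduler {
    static final long GPS_INTERVAL = 2 * 60 * 60; // seconds between GPS fixes
    static final long PING_INTERVAL = 30;         // seconds between ID pings

    // Initialized so both actions are due immediately at startup.
    long lastFix = -GPS_INTERVAL;
    long lastPing = -PING_INTERVAL;

    // Called from the main loop with the current clock time (epoch seconds).
    boolean dueForFix(long now) {
        if (now - lastFix >= GPS_INTERVAL) { lastFix = now; return true; }
        return false;
    }

    boolean dueForPing(long now) {
        if (now - lastPing >= PING_INTERVAL) { lastPing = now; return true; }
        return false;
    }
}
```

The point of the interval check is exactly the power concern above: the GPS receiver and radio only draw full current when an action is actually due.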

When Monkey A comes within a set distance of Monkey B (typically determined by the radio range of the RFID devices attached to the animals, e.g. 10 meters), Monkey B’s receiving device will record a date / time stamp of when the encounter occurred, how long the encounter lasted, and the ID code of Monkey A’s transmitting device. Monkey A’s device will listen for and record the same. This event will then trigger a data sync protocol between the two mobile nodes: Monkey B’s receiving device will ask Monkey A’s transmitting device whether it has any data stored that differs from its own data set. If so, the delta between the data sets will be shared bi-directionally over radio. As in the example above, Monkey A’s device may be a gold mine, because it stores the most recent data for Monkey “C”, Monkey “D”, and Monkey “E”, so Monkey B’s device will receive the entire updated data set all at once, without ever coming into contact with the other monkeys in the group.
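Here is a rough sketch in Java of what the encounter-logging side of one collar might look like. Everything here is hypothetical (the names, the timeout, and the assumption that a lower radio layer delivers a peer’s ID whenever a ping is heard); it is only meant to show the record an encounter produces:

```java
// Sketch: log proximity encounters on one collar (hypothetical API).
// The radio layer is assumed to call onPing() whenever a peer's ID ping is heard.
class ProximityLogger {
    static class Encounter {
        String peerId;
        long startTime, lastHeard; // epoch seconds
    }

    final java.util.Map<String, Encounter> active = new java.util.HashMap<>();
    final java.util.List<String> eventLog = new java.util.ArrayList<>();
    static final long TIMEOUT = 60; // seconds of silence that ends an encounter

    void onPing(String peerId, long now) {
        Encounter e = active.get(peerId);
        if (e == null) {
            e = new Encounter();
            e.peerId = peerId;
            e.startTime = now;
            active.put(peerId, e);
        }
        e.lastHeard = now;
    }

    // Called periodically: close out encounters that have gone quiet,
    // logging "peerId,startTime,duration". This close-out is also the
    // point where the data sync protocol between the two collars would fire.
    void tick(long now) {
        java.util.Iterator<Encounter> it = active.values().iterator();
        while (it.hasNext()) {
            Encounter e = it.next();
            if (now - e.lastHeard > TIMEOUT) {
                eventLog.add(e.peerId + "," + e.startTime + "," + (e.lastHeard - e.startTime));
                it.remove();
            }
        }
    }
}
```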

The strength of this design is that the data is not lost if one animal is lost or suddenly becomes untraceable. The data sharing architecture in this system is redundant, so that copies of the data are stored at each end point. I think this flexibility is very important in austere environments where you cannot always depend on a fixed, central point.

Here is an architecture diagram to show how this works:

Mobile Monkey Mesh

Diagram: Mobile Monkey Mesh

Sources of inspiration

In 2006, after returning from Afghanistan, I helped run an international humanitarian disaster-response exercise called Strong Angel III, which focused on experimenting with cutting-edge techniques and technologies to improve information flow and cooperation across the civil-military boundary in post-disaster and post-conflict field environments. As part of this event, the SA-III team experimented with the “Pony Express”, an asynchronous mobile data mesh system constructed from a 4WD Jeep with a wifi router and a Groove mobile relay server. They drove a route twice each day that formed a temporary wifi cloud for 2,000 feet around the car, thereby syncing all waiting information in Groove workspaces between the relay server in the vehicle and the users within the temporary cloud.

This gave me the idea for a Mobile Monkey Mesh!

"Pony Express" in Hawaii

“Pony Express” in Hawaii

Based on the Pony Express model, Microsoft built and released FeedSync, which combines subscription models like RSS with data synchronization. FeedSync then evolved into Mesh4x which builds a data mesh that allows for two-way synchronization of information in a peer-to-peer symmetric way. Microsoft also released Live Mesh, based on FeedSync technologies, that allows files, folders and other data to be shared and synchronized across multiple personal devices and up to 5 GB on the web. Google has also developed something similar called Table Cast.

Power consumption

Our worst enemy in the field is always power. How much power is needed to run this asynchronous mobile data mesh network? That is what I need to research, because that will really determine the sustainability and reliability of this design.

There are a couple of ways to “trim” our electrical waistlines in this system. One, I already mentioned above, is to prescribe a fixed schedule or interval for GPS retrieval and proximity pinging, so that we’re not constantly draining the battery power.

Another power-saving method is to prevent “memory flood” on the data storage devices by setting the data sync process to run only if the last proximity event* with that individual ended more than one or two hours ago. This will help reduce power consumption and redundant logging. The Sirtrack Proximity Collar (Proximity E2C Logger), currently used by Di Fiore in the field, works in this same way to anticipate the problem of memory flood when animals den together or when three or more animals meet at once.
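The throttle could be as simple as a per-peer timer. A sketch, with an assumed two-hour minimum interval (the names and threshold are mine, not Sirtrack’s):

```java
// Sketch: skip a sync if we synced with this peer recently, to avoid
// "memory flood" when animals den together. Threshold is an assumption.
class SyncThrottle {
    final java.util.Map<String, Long> lastSyncTime = new java.util.HashMap<>();
    static final long MIN_INTERVAL = 2 * 60 * 60; // seconds (2 hours)

    // Returns true (and records the time) only if enough time has passed
    // since the last sync with this peer; otherwise skips to save power.
    boolean shouldSync(String peerId, long now) {
        Long last = lastSyncTime.get(peerId);
        if (last != null && now - last < MIN_INTERVAL) {
            return false;
        }
        lastSyncTime.put(peerId, now);
        return true;
    }
}
```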

Other limitations

Some potential limitations relate to the proximity sensing devices. When we ran some small tests with the Sirtrack collars in class last week, the results were only semi-accurate in both trials; one of the Sirtrack collars seemed to record data more accurately than the other. In 2009, ITP students Carolina Vallejo and Kenny Chiou tested proximity event detection using infrared sensors. Based on their tests, they observed that the IrDA sensors may need a direct line of sight, so not all proximity interactions will be recorded. My suspicion is that this could be the case for the RFID sensors as well.

I found a research study online, “New Radiocollars for the Detection of Proximity among Individuals”, conducted on raccoons in 2006 by Suzanne Prange and Trevor Jordan. The findings were published in the Wildlife Society Bulletin. The researchers found that the “recorded contact duration deviated from actual time by ≤3 seconds for short-duration (10–300 sec), and by ≤30 seconds for extended-duration (8–14 hr) contacts recorded as a single event. There was a tendency for the collars to record extended-duration contacts as multiple events, with the frequency dependent on settings.” And they go on to say, “We downloaded 35 of the 42 proximity detectors deployed on free-ranging raccoons. Of these, approximately 57% were functioning properly, 9% exhibited problems apparently correctable in the field, and 34% exhibited problems not correctable in the field.”

What kind of data can we get with the Mobile Monkey Mesh?

Location data for an individual’s position:

  • longitude
  • latitude
  • altitude
  • date / time stamp (month, day, year, hours, minutes, seconds)

Event data for two or more individuals:

  • unique IDs
  • frequency of proximity
  • duration of incidences of proximity
  • location data for two or more individuals triggered by a proximity event (longitude, latitude, altitude, date / time stamp)

Synchronization event data:

  • unique IDs
  • date / time of sync event
  • error log (sync failure)

What hardware is needed for this collar?

The current Sirtrack Proximity Collar (Proximity E2C Logger) weighs 45 grams and lasts 170–200 days (6–8 months) with VHF turned on. VHF frequencies are available from 148.000 MHz to 173.999 MHz. The collar is made of leather and is generally placed around the monkey’s neck, with a width of 10 mm and a circumference range of 160–200 mm. This device already contains an RFID transmitter / receiver, radio transmitter, real-time clock, battery, and storage, so I would need to add the GPS component and the data sync protocol code.

Traditional radio collars last 18 months to 3 years, depending on battery size. However, adding the GPS transmitter reduces the field life to approximately 8 months (and that is with a very intermittent schedule, attempting satellite fixes only every half hour during a 12-hour daytime window). Given that the proximity collars are also said to last 6–8 months, I would venture to guess that this Mobile Monkey Mesh collar would last approximately 6 months in the field.

If I were to design a DIY collar myself, then I would need the following:

  • radio transmitter
  • GPS transmitter (30-45mA, 5-9g)
  • RFID transmitter / receiver
  • real-time clock
  • microcontroller
  • storage and logging device (OpenLog is 2mA idle, 6mA at maximum recording rate)
  • battery
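As a back-of-envelope check on the 6-month guess above, here is a rough power budget using the component draws from the parts list. The duty cycles and battery capacity are my own assumptions, purely for illustration (e.g. I assume the OpenLog is power-gated off except while writing, rather than idling at 2 mA):

```java
// Back-of-envelope battery-life estimate for the DIY collar parts list.
// All figures in mA; duty cycles are assumptions for illustration.
public class PowerBudget {
    public static double estimateDays(double batteryMah) {
        double gps   = 40.0 * (1.0 / 120.0); // ~40 mA for a 1-minute fix every 2 hours
        double log   = 6.0 * 0.01;           // OpenLog at 6 mA, writing ~1% of the time (power-gated otherwise)
        double radio = 0.2;                  // assumed average for the ping/listen duty cycle
        double mcu   = 0.1;                  // assumed average for a mostly-sleeping microcontroller
        double avgMa = gps + log + radio + mcu;
        return batteryMah / avgMa / 24.0;    // mAh -> hours of capacity -> days
    }

    public static void main(String[] args) {
        System.out.printf("estimated field life: %.0f days on a 2600 mAh battery%n",
                          estimateDays(2600));
    }
}
```

With these assumed duty cycles, the average draw works out to roughly 0.7 mA, or around five months on a 2600 mAh battery, which is at least in the same ballpark as the 6-month guess above.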

*A proximity event is where two animals are determined to be within a predetermined minimum distance of one another, which will typically vary between different species.