From 'Fresh Kills' to 'Freshkills'

On Sunday, October 3, I visited the Freshkills Park site on Staten Island. Getting there from the East Village took 2.5 hours of transit: the subway, the Staten Island Ferry, the S62 bus and, finally, a shuttle bus provided by the park. It’s a bit of a pain to get to, but by the time the park opens I’m sure that part will be figured out.

Freshkills is an NYC park, but feels worlds away from Prospect Park or Central Park – and with good cause, because Freshkills Park was once the Fresh Kills Landfill. Once complete, Freshkills will cover 2,200 acres and will be the largest of New York City’s parks, almost three times the size of Central Park.

There’s lots of history beneath your feet in Freshkills Park. Beneath the park’s four hills are 150 million tons of garbage, much of which was collected from New York residences between 1948 and 2001. There’s a good chance that my grandparents’ trash is in there somewhere, alongside the trash of all the other New Yorkers (and those who visited) between those years. To add to the park’s history, materials from the World Trade Center site were also disposed of at the Fresh Kills Landfill.

One of the park planners led our tour up the North Hill and discussed the plans for the park’s development, as well as how the landfill gas and leachate emitted by the decomposing trash are being recovered and processed.

I loved hearing about the place’s past, and really look forward to seeing how it comes together in the next 20 years or so – by which point we’ll hopefully be able to teleport ourselves there from Manhattan.

Below are some pictures from the open day at Freshkills Park. Click on the thumbnails for a larger image.


ITP411: an SMS-based phone directory

It’s often lamented around ITP that there isn’t a master phone directory that allows students to contact each other easily via phone. Email addresses are usually simple to find, but sometimes email is neither the fastest nor easiest way to reach other students directly.

In the past, students have tried a shared Google Doc – a master list to which everyone can add their phone number – but participation has been low, because a web-based list manages to be both too accessible and not accessible enough. On a computer, the document is an open book: anyone can sit down and copy every number at once. On the go, though, it’s the opposite: digging up a web-based document on your phone, let alone trawling through it, is slow and fiddly when all you’re looking for is a single phone number.

With this in mind, Noah King and I created an SMS-based ITP phone directory that allows students to search for specific names and phone numbers via text message.

We called our service ITP411 and launched it last night. ITP411 uses PHP, MySQL and TextMarks, and enables users to do the following (a rough sketch of the command routing appears after the list):

  • Add their name and phone number to the directory.
  • Search for names in the directory by putting the “@” symbol in front of the name they’re looking for.
  • Edit their listing in the directory. To edit a listing, users put the “!” symbol in front of the new name.

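For the curious, the routing logic itself is simple. The real thing is a PHP script sitting behind a TextMarks keyword and backed by MySQL, but here’s a rough, hypothetical sketch of the same idea in Java – the class and field names are made up, and the in-memory maps stand in for the database:

```java
// Hypothetical sketch (Java, not the actual PHP) of how an incoming TextMarks
// message might be routed: plain text adds a listing, "@name" searches, "!name" edits.
import java.util.HashMap;
import java.util.Map;

public class Itp411Router {
    private final Map<String, String> listings = new HashMap<>(); // phone -> listed name
    private final Map<String, String> byName = new HashMap<>();   // name (lowercased) -> phone

    public String handle(String senderPhone, String message) {
        String text = message.trim();
        if (text.startsWith("@")) {                      // search
            String query = text.substring(1).trim().toLowerCase();
            String number = byName.get(query);
            return number != null ? query + ": " + number : "No listing for " + query;
        } else if (text.startsWith("!")) {               // edit an existing listing
            String newName = text.substring(1).trim();
            String oldName = listings.put(senderPhone, newName);
            if (oldName != null) byName.remove(oldName.toLowerCase());
            byName.put(newName.toLowerCase(), senderPhone);
            return "Updated your listing to " + newName;
        } else {                                         // add a new listing
            listings.put(senderPhone, text);
            byName.put(text.toLowerCase(), senderPhone);
            return "Added " + text + " to the directory";
        }
    }

    public static void main(String[] args) {
        Itp411Router router = new Itp411Router();
        System.out.println(router.handle("+12125551234", "Jane Doe"));
        System.out.println(router.handle("+12125555678", "@Jane Doe"));
        System.out.println(router.handle("+12125551234", "!Jane D."));
    }
}
```
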
About 25 out of 230 ITP students have already signed up for ITP411 – not bad for the first 24 hours! We’ll continue to improve the service as we receive feedback.

Sensor Readings, In and Out

This week, I plugged myself into my Arduino and monitored a number of readings to see how my body reacted to events over a period of one hour.

I turned off the air conditioner and monitored readings from the following three sensors while sitting with my laptop:

  • Galvanic Skin Response Sensor, consisting of two pieces of copper soldered to wires, which I held under my left index and middle fingers to measure how much I was sweating. I was pretty surprised and amused to see how the readings changed over time.
  • Temperature Sensor, to record the temperature in my environment. These readings increased while the A/C was turned off.
  • Photocell, to record ambient light. Unsurprisingly, this didn’t change much: I didn’t move, it was dark outside, and the indoor lights kept the room stably lit.

The info was all fed into Processing to create a graph, and all the readings were recorded into a text file over the course of the hour. It wasn’t the most exciting experiment in the world, but it was good to get back into the swing of using an Arduino.
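
For reference, here’s a minimal Processing sketch of that kind of logging – not the original, and the details are assumptions (it expects the Arduino to print the three readings as one comma-separated line per sample, e.g. “512,301,87”):

```java
// Minimal Processing sketch (not the original) that reads three comma-separated
// sensor values from the serial port, plots them, and logs them to a text file.
import processing.serial.*;

Serial port;
PrintWriter log;
int x = 0;  // current plot column

void setup() {
  size(800, 400);
  background(255);
  port = new Serial(this, Serial.list()[0], 9600);  // adjust the port index as needed
  port.bufferUntil('\n');
  log = createWriter("readings.txt");
}

void draw() {
  // Drawing happens in serialEvent(), whenever a new line of data arrives.
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  line = trim(line);
  int[] vals = int(split(line, ','));   // [GSR, temperature, photocell]
  if (vals.length < 3) return;

  // Log the raw readings with a timestamp.
  log.println(millis() + "," + line);
  log.flush();

  // Plot each reading as a colored point, scaled from 0-1023 to the window height.
  color[] colors = { color(200, 0, 0), color(0, 150, 0), color(0, 0, 200) };
  for (int i = 0; i < 3; i++) {
    stroke(colors[i]);
    point(x, map(vals[i], 0, 1023, height, 0));
  }
  x = (x + 1) % width;
}
```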

Next, I’d like to play around a bit with a heart rate monitor to see what gets me (or someone else) going. Or maybe I should get myself to the gym…


Once upon a time, I lived on the Upper East Side…

I’d been meaning to visit a museum or three for some end-of-summer inspiration before returning to ITP. This semester, Cabinets of Wonder provides a second chance, along with a gentle, much-needed kick in the posterior, to do this.

On Monday, I headed to the Upper East Side to check out Cooper-Hewitt, National Design Museum, Smithsonian Institution and The Jewish Museum.

Cooper-Hewitt, housed in the grand mansion of Andrew Carnegie, is described on its website as “the only museum in the nation devoted exclusively to historic and contemporary design.” At the museum gate, a sign announcing the National Design Triennial: Why Design Now? exhibition greeted me in Clearview Hwy typeface, alongside a circular steel NYC bike rack that I would soon see on display inside (which may or may not have been for show). The building’s exterior and interior details appear well-preserved, save for a conservatory area housing the museum’s cafe.

A guard said “good morning” as I walked into the building. The receptionist’s words were “ten dollars” and “no maps, there are only two floors.” In the first exhibition room, I was startled by a loud yawn from the guard in the doorway. The other guards were pleasant.

The information plaques at Cooper-Hewitt are placed about a foot above the floor, which would make them super-easy for children to read. I was the youngest visitor I saw that day, though – the other visitors, mostly groups of two to six people aged 60+, along with a few middle-aged women walking around by themselves, were hunching over to read the information pertaining to each piece. The pieces were accessible designs that I think made connections with people who might not ordinarily recognize design in daily life – Etsy, Clearview Hwy and NYT information graphics are all part of the current exhibition.

One piece stood out, perhaps because I could see my reflection in it: a huge solar panel made of mirrors. It was the largest item in the room, beautifully shiny, and it looked nothing like any solar panel I’ve ever seen. Granted, the thing isn’t supposed to be mounted on a wall indoors, but I was glad that it had been, and it made me want to stare at it. Big, shiny objects get me every time, especially if I can see my face in them.

There was an interesting interaction between a tour guide and a group of three visitors – an Australian woman around 30 and her parents – who left the guided group and walked into the room where I was standing:

Guide: Come back! We’re discussing one of the items you just passed.

Visitor: I think we’ll just deviate for a little bit. (guide leaves the room)

Visitor (whispers to parents, who appear very happy to peruse without the guide): I feel like she has a very different take on some things.

The Jewish Museum was a block away. Upon entering, I underwent a bag check and walked through a metal detector – and received three warm welcomes. The receptionist chit-chatted for a few minutes – where are you from, what are you studying at NYU, have a great time, et cetera.

I wasn’t quite sure what to expect at the Jewish Museum – perhaps religious objects? Also, having been raised in a corner of Ireland, I figured I’d probably feel a little lost there. I was wrong.

The entire first floor of the museum housed South African Photographs by David Goldblatt. This place was busier than Cooper-Hewitt – several over-60s milled around in small groups, along with a number of women on their own, some of whom appeared to be students. Despite this, it seemed calmer than Cooper-Hewitt.

I was interested to see how the Holocaust would be presented. That area of the museum seemed still, and the lighting seemed dimmer. No-one spoke, and I forgot to take notes for a while. The only sound I remember hearing was that of an air-raid siren in a video depicting the 2-minute silence that takes place in Israel on Yom Hashoah (Holocaust Remembrance Day).

Onto the feminist painting exhibition, where I overheard a snippet of a couple’s conversation. The woman was sitting on a bench, looking at an oil painting. The man stood beside her, urging her to hurry on.

Man (pointing to the painting): That looks like what happens when I’m cleaning my brushes.

Woman: I don’t care. It’s pleasing to my eye.

Intrigued by a dark room with a few things glowing inside, I stumbled upon my favorite exhibition of the day: Fish Forms: Lamps by Frank Gehry. It’s a somewhat absurd exhibition of eight lamps, and I loved it. The darkness contrasted beautifully with the carefully moderated lighting in the other rooms, the glowing fish sculptures made me feel all warm and fuzzy inside, and I saw Gehry from a fresh angle that I’d never considered. The plaque on the left-hand-side near the doorway informed visitors how Gehry had come to design these lamps (Formica Corporation asked Gehry to make something with the company’s new laminate product, and he broke a piece and came up with the lamp design). We were then left to ourselves in the intimate, dark space. The visitors around me seemed content to stay a while, looking at each piece. On the way out of the room, a video showing Gehry’s buildings pointed out how fish forms, structure and movement have informed his architecture.

Visiting Cooper-Hewitt and the Jewish Museum was a great way to spend Monday afternoon. I learned a ton, and I was surprised by how much I enjoyed the Jewish Museum. Cooper-Hewitt might be interesting for kids and for people who know little about design, but I feel it has some work to do in engaging its visitors – at times it felt like reading a design magazine, flipping from one page to the next. By the end of this class, I hope to figure out a few ways in which museums can better engage visitors and encourage interaction.

Illusion

There’s a certain feeling of discomfort that I get when I stare at something, or in some cases, even think about something, for too long. I spent some time looking through optical illusions the other day and found myself feeling rather queasy by the end of it. At first, I viewed the illusions with a clear mind, seeing the illusions as they were intended to be viewed, allowing them to play with my perception.

Knowing that I was looking at illusions and allowing my mind to play tricks on me made me determined to see things as they truly are, and not as intended. I began to push myself to see the images as they were, to see past the illusion.

After spending a few minutes trying desperately to see past the illusions, I had to take a break from looking at the screen, because illusions felt like the kind of things that could drive me a bit bonkers. How can our brains slip up this easily on optical illusions? I suppose I was a little frustrated – but after looking at several optical illusions, I wondered how we might use such illusions for good, rather than using them to play with people’s minds. For many reasons, it’s uncomfortable to sense that you are being controlled by an external source.

As much as illusions might put me on the edge of my seat, I hope to find some interesting uses for them over the coming weeks.

Dan Ariely’s TED talk suggested that we might want to look a little further into how our minds are wired. It also made me consider how we, as interaction designers, can frame things to assure individuals of their autonomy and free will while they have, in fact, just a couple of choices. Or, maybe, no choice at all?

I.C.U.

I.C.U. is an eye-tracking game that enables the user to explode objects on a screen simply by looking at them.

Here’s how it works: User dons rad, red eye-tracking glasses, stands directly in front of a projection screen and then looks at 12 calibration points that are projected on the screen, one by one, while a brief calibration is performed. Then the game begins. An object, perhaps a banana, appears on the screen, and the user must stare at that object until it explodes into yellow particles that fly all over the screen. Then another object, perhaps a strawberry, appears, and the user shifts his/her gaze to the strawberry, which blows up in a cloud of red strawberry particles. A satisfying “splat!” sound accompanies each explosion.

I.C.U. eye-tracking game from katherine keane on Vimeo.

I.C.U. was created entirely in Processing. I loved the idea of blowing up objects by looking at them – but being a chill, peaceful type of person, I wanted to make the explosions as comical and kid-friendly as possible. Instead of making epic explosions as previously intended, I decided to work again with Box2D for Processing, an open-source physics simulation library written with game designers in mind, to create explosions of colorful particles incorporating weight, velocity, gravity simulation and collision detection.

When the user’s eye rests on an object, the object jiggles, then quickly disappears and is replaced by an explosion of particles. The initial explosive force sends a burst of particles flying in all directions; they then decelerate and fall downwards. I created boundaries on the sides of the screen for the particles to bounce off, so they scatter like confetti before falling to the ground.
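
Stripped of the eye-tracking and the fruit images, the explosion boils down to spawning a burst of small circular bodies with random outward velocities and letting Box2D handle the rest. Here’s a rough sketch of that idea using Shiffman’s Box2D-for-Processing wrapper – a mouse click stands in for the gaze trigger, the boundaries are omitted, and the wrapper’s class names (PBox2D, etc.) vary a bit between versions, so treat this as a sketch rather than the actual project code:

```java
// Sketch of the particle burst: on click, spawn ~100 small dynamic circles at the
// click point and fling them outward; Box2D handles gravity and collisions.
import pbox2d.*;
import org.jbox2d.collision.shapes.*;
import org.jbox2d.common.*;
import org.jbox2d.dynamics.*;

PBox2D box2d;
ArrayList<Body> particles = new ArrayList<Body>();

void setup() {
  size(640, 480);
  box2d = new PBox2D(this);
  box2d.createWorld();
  box2d.setGravity(0, -20);
}

void mousePressed() {
  // Stand-in for the gaze trigger: burst particles from the click point.
  for (int i = 0; i < 100; i++) {
    BodyDef bd = new BodyDef();
    bd.type = BodyType.DYNAMIC;
    bd.position.set(box2d.coordPixelsToWorld(mouseX, mouseY));
    Body body = box2d.createBody(bd);

    CircleShape cs = new CircleShape();
    cs.m_radius = box2d.scalarPixelsToWorld(3);
    body.createFixture(cs, 1);

    // Fling each particle outward in a random direction.
    float angle = random(TWO_PI);
    float speed = random(5, 20);
    body.setLinearVelocity(new Vec2(cos(angle) * speed, sin(angle) * speed));

    particles.add(body);
  }
}

void draw() {
  background(255);
  box2d.step();
  fill(255, 200, 0);
  noStroke();
  for (Body b : particles) {
    Vec2 pos = box2d.getBodyPixelCoord(b);
    ellipse(pos.x, pos.y, 6, 6);
  }
}
```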

I chose Box2D so that the particles could pile up, allowing the user to make a big fruit salad that fills the entire screen. I’ve discovered, however, that Box2D probably was not the most efficient choice for this particular project. The video tracking involved in the eye-tracking component takes up a huge amount of processing power, and when combined with Box2D the frame rate slows down significantly – from 60 to 15 fps, in some cases – once a few hundred particles are on the screen. Rewriting the code to use a single particle system for the explosions will likely be a significant improvement.

The eye-tracking component of the project can be challenging and finicky at times, but when it works, it works really well. The eye-tracking Processing code relies on the positions of the pupil and the glint to tell where the user’s eye is focused on the screen. It’s important to get a good image of the eye, especially for the calibration stage – the LED (only one LED, mind you, or there will be multiple glints, which will not do) needs to be positioned so that it lights up the whole eye, and must be kept off to the side rather than in the camera’s view.
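
To make the pupil-plus-glint idea concrete, here’s a deliberately over-simplified illustration. The real calibration fits the full 12-point grid; this sketch just maps the pupil-glint offset linearly to screen coordinates, uses made-up calibration numbers, and lets the mouse stand in for the detected pupil (the actual detection happens elsewhere in the code):

```java
// Over-simplified illustration of the pupil/glint mapping (not the real calibration).
// The mouse stands in for the detected pupil center in "camera" coordinates, the
// glint is treated as a fixed point, and the offset between them is mapped linearly
// to a gaze position on screen. All calibration numbers below are made up.
PVector glint = new PVector(50, 50);     // pretend glint center, in camera pixels
float dxMin = -30, dxMax = 30;           // offset extremes seen during "calibration"
float dyMin = -20, dyMax = 20;

void setup() {
  size(640, 480);
}

void draw() {
  background(0);
  // In the real sketch the pupil center comes from tracking the camera image.
  PVector pupil = new PVector(map(mouseX, 0, width, 0, 100),
                              map(mouseY, 0, height, 0, 100));
  float dx = pupil.x - glint.x;
  float dy = pupil.y - glint.y;
  float gx = constrain(map(dx, dxMin, dxMax, 0, width), 0, width);
  float gy = constrain(map(dy, dyMin, dyMax, 0, height), 0, height);
  fill(255, 0, 0);
  noStroke();
  ellipse(gx, gy, 20, 20);               // where this crude model thinks you're looking
}
```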

The rad, red eye-tracking glasses are based on the EyeWriter design, the full directions for which can be found on Instructables. Our glasses consist of a hacked PS3 Eye camera fitted with an infrared filter to block out all but IR light, a pair of sunglasses from the ever-wonderful St. Marks Place, some alligator clips, a battery pack and an infrared LED to illuminate the eye. The glasses cost about $50 to build. They’re a little on the large side and tend to slip down my nose (which is a little on the small side), but they’re awesome, uber-nerdy and lots of fun to wear.

The Processing sketch and code can be found here. Feel free to bug me for my eye-tracking glasses, or if you have about $50 and an hour or two to spare, try making your own EyeWriter using the Instructables.

There are many cool possibilities for future eye-tracking projects, and I plan to continue working with it. Once I figure out how to make the eye-tracking system more robust onscreen, I’d like to move off-screen to create a physical moving object that the user can control with their eyes. It’s the next best thing to telekinesis, methinks.

Also, lasers. I want to put lasers on the glasses so the user can have awesome superhero laser vision.

My work with eye-tracking to date led to a few more random observations:

  • Eye-tracking is a good way for me to drive other people bananas by controlling what they can see, based on what I’m looking at.
  • Eye-tracking is a good way for other people to have a laugh at my expense, depending on what’s happening on the part of the screen I’m not looking at.
  • The eye is an inefficient cursor for controlling objects in a larger context, because you can only “see” what you’re looking at.

Below are some pictures from the project. Click on the thumbnails for a larger image.

Detonation, Telekinesis, et cetera

I’m working with Scott Wayne Indiana on a project that enables kids to use their eyes to control objects on a screen.

Out of the many computer vision techniques we’ve looked at in the Hospitable Room class this semester, eye-tracking appealed to us because it can cater to kids with a wide range of physical mobility, from mild to severe limitations. Eye-tracking also seems like a more direct route to the brain than any other sort of camera tracking we’ve played with, and could allow the user to do things that appear impossible – even magical.

In preliminary testing/research of eye-tracking, we’ve learned that any interface we develop most likely cannot depend upon the eye as a reliable cursor. Also, when a user is looking at an object, they tend to lose focus of everything else, to the point where that object is pretty much the only thing they can see.

What compelling activities can result from knowing what someone is looking at – or perhaps what they aren’t looking at? Ideas included laser vision, X-ray vision and more, but we kept returning to two ideas, both of which lend themselves to play:

  • Telekinesis. Moving objects, seemingly with your mind. But really, your eyes would move it. This could be screen-based at first, and then made physical (think magnets under the table, and other “magic” tricks).
  • Blowing Stuff Up. Who doesn’t like to blow things up? Staring at an object until it explodes would be fun. This firework show done in Processing looks pretty cool, and this Pixel Explosion example may be a good way to begin playing with exploding existing images.

The destruction of Alderaan is an epic explosion that could be worth trying to replicate in code:

We’ve received some pretty positive feedback on the idea of blowing stuff up, so we’re exploring that idea first. In my mind, here is how it will work: User sees object, then stares at it. As the user stares at the object, the object begins to shake or grow as an indication that something is about to happen, and the shaking/growth gains momentum the longer the user stares at it. At the max time, the object explodes.
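
Here’s a rough sketch of that loop in Processing, with the mouse standing in for the gaze point and placeholder numbers for the timing – the longer the cursor dwells on the target, the harder it shakes and the bigger it grows, until it “explodes” (here it simply resets):

```java
// Sketch of the stare-to-detonate loop, using the mouse as a stand-in for gaze.
float targetX, targetY;
float baseSize = 60;
float dwell = 0;                 // frames the gaze has stayed on the target
int dwellToExplode = 120;        // ~2 seconds at 60 fps (placeholder value)

void setup() {
  size(640, 480);
  resetTarget();
}

void draw() {
  background(255);

  boolean looking = dist(mouseX, mouseY, targetX, targetY) < baseSize;
  dwell = looking ? dwell + 1 : max(0, dwell - 2);   // build up, decay when you look away

  float progress = dwell / dwellToExplode;
  float jiggle = progress * 8;                        // shake harder as dwell grows
  float size = baseSize * (1 + progress * 0.5);       // and grow a little, too

  fill(255, 220, 0);
  ellipse(targetX + random(-jiggle, jiggle),
          targetY + random(-jiggle, jiggle),
          size, size);

  if (dwell >= dwellToExplode) {
    // The explosion (particles, sound, etc.) would be triggered here.
    resetTarget();
    dwell = 0;
  }
}

void resetTarget() {
  targetX = random(100, width - 100);
  targetY = random(100, height - 100);
}
```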

At this point, here are some of the most pressing questions:

  • Would staring at something until it explodes be compelling to kids?
  • What kinds of objects should be blown up? What would kids like to detonate? Should we encourage kids to detonate things?
  • What is the best way to make an epic explosion in Processing? Light effects of some sort might be cool.
  • Blow up an area of a large image, or blow up a separate smaller image?

In terms of hardware, we’re currently using a baseball cap with a camera and an IR LED mounted on it. Next, we’ll build a pair of eye-tracking glasses from the EyeWriter Instructables, which uses a hacked PS3 Eye camera and a pair of sunglasses to put the hardware together for about $50.

Nature of Code: String of Beads

I had fun playing around with the Box2D library a few weeks ago, and also enjoyed working with pendulum motion for this sketch. A string of pearls seemed to combine these ideas nicely – so, for the Nature of Code midterm assignment, I set out to recreate the motion and behaviors of a string of pearls.

The “big idea” is a string of pearls suspended from above, swinging from side to side until it collides with an object and snaps in half, letting the pearls fall. Once they hit the ground, the pearls will bounce and roll around until they eventually come to rest.

The string of beads is composed of bead objects attached to one another via joints. My first task was to create the appropriate joints. I researched several joint types in the Box2D library and went with the Revolute Joint, which forces each pair of adjacent beads to share a common anchor point around which the two bodies can rotate.
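
Here’s roughly what pinning beads together looks like with Shiffman’s Box2D-for-Processing wrapper. The names (PBox2D, box2d.world.createJoint, etc.) vary a little between versions of the wrapper, so this is a sketch of the idea rather than my actual code – the first bead is made static so the chain hangs and can swing:

```java
// Sketch of a bead chain: each new bead is pinned to the previous one with a
// revolute joint anchored at the point where the two circles touch.
import pbox2d.*;
import org.jbox2d.collision.shapes.*;
import org.jbox2d.common.*;
import org.jbox2d.dynamics.*;
import org.jbox2d.dynamics.joints.*;

PBox2D box2d;
ArrayList<Body> beads = new ArrayList<Body>();

void setup() {
  size(400, 400);
  box2d = new PBox2D(this);
  box2d.createWorld();

  Body prev = null;
  for (int i = 0; i < 10; i++) {
    Body bead = makeBead(width/2, 50 + i * 12, 6, i == 0);   // first bead is the fixed anchor
    if (prev != null) {
      RevoluteJointDef rjd = new RevoluteJointDef();
      Vec2 anchor = box2d.coordPixelsToWorld(width/2, 50 + i * 12 - 6);  // shared point
      rjd.initialize(prev, bead, anchor);
      box2d.world.createJoint(rjd);
    }
    prev = bead;
    beads.add(bead);
  }
}

Body makeBead(float x, float y, float r, boolean fixed) {
  BodyDef bd = new BodyDef();
  bd.type = fixed ? BodyType.STATIC : BodyType.DYNAMIC;
  bd.position.set(box2d.coordPixelsToWorld(x, y));
  Body body = box2d.createBody(bd);
  CircleShape cs = new CircleShape();
  cs.m_radius = box2d.scalarPixelsToWorld(r);
  body.createFixture(cs, 1);
  return body;
}

void draw() {
  background(255);
  box2d.step();
  fill(230);
  stroke(0);
  for (Body b : beads) {
    Vec2 pos = box2d.getBodyPixelCoord(b);
    ellipse(pos.x, pos.y, 12, 12);
  }
}
```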

The working Processing sketch and code can be found here.

My next step will be to make the beads collide with the boundary, snap the joints and make the beads fall to the ground.

[stop the presses] Kids want to have fun

Zeven, Sindy and I headed up to Rusk on Thursday to meet the kids for the first time. We told them a little bit about ITP and the wheelchair camera tracking project. Mostly, though, we were interested in hearing about what they’re looking for in a recreation room, and the kinds of things they like to do.

The children’s responses were all over the map:

  • Music and singing
  • Art: Collage, sculpting
  • Uno, Candyland
  • Jenga, Uno Stacko, Mancala
  • Sports typically played outside: archery, baseball, football
  • Jigsaw puzzles
  • Searching for items in pictures (iSpy)
  • Cars, driving

With our Google Earth project in mind, Zeven asked the kids how they felt about seeing different places around the world through pictures and video. They were pretty enthusiastic, and suggested trips to places like London, Hawaii, Paris, Hollywood, Disneyworld… oh, and Coney Island!

In our first class meeting a fortnight ago, Marianne told us that kids don’t mince their words: they’ll always tell you exactly what they think. As we continue to learn about designing and coming up with ideas for people, these tough critics will give us a healthy run for our money.

As Zeven, Sindy and I left Rusk that night, we agreed upon one thing that stood out to us: the success of the projects created for Hospitable Room will rely heavily – and perhaps solely (seriously) – on our ability to create something that the kids deem enjoyable. Specifically, we need to make something that is fun.

Which raises the question: what makes an experience “fun,” anyway? What, precisely, does it mean to experience fun? Can fun be created?

Lots of questions. Let the research begin. In the meantime, we’re still working on the Google Earth project. More on that later!

Hospitable Room: Navigating a [small] world

Zeven and I created this Processing sketch as a first step toward letting users navigate through the world. Currently, that world is a small one, consisting of a single view of Dingle, Ireland. Click on the red thumbtack on the map to navigate to that location. Move the mouse left, right, up and down on the photo to zoom in.

This version uses mouse navigation; once we get the wheelchair camera tracking figured out a bit further, we’ll user-test with the wheelchair.
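
For reference, here’s a stripped-down version of the interaction in Processing – the image filenames are placeholders, and the mouse stands in for whatever tracking we end up using:

```java
// Stripped-down map navigation: a map with one clickable thumbtack; clicking it
// switches to a photo view, where the mouse position pans/zooms the image and a
// second click returns to the map. Filenames below are placeholders.
PImage mapImage, photo;
PVector tack = new PVector(220, 160);   // thumbtack position on the map
boolean viewingPhoto = false;

void setup() {
  size(640, 480);
  mapImage = loadImage("map.jpg");      // placeholder filenames
  photo = loadImage("dingle.jpg");
}

void draw() {
  if (!viewingPhoto) {
    image(mapImage, 0, 0, width, height);
    fill(255, 0, 0);
    noStroke();
    ellipse(tack.x, tack.y, 14, 14);    // the red thumbtack
  } else {
    // Vertical mouse position drives zoom; horizontal position pans the photo.
    float zoom = map(mouseY, 0, height, 1, 3);
    float w = width * zoom;
    float h = height * zoom;
    float x = map(mouseX, 0, width, 0, width - w);
    image(photo, x, (height - h) / 2, w, h);
  }
}

void mousePressed() {
  if (!viewingPhoto && dist(mouseX, mouseY, tack.x, tack.y) < 10) {
    viewingPhoto = true;                // clicked the thumbtack: go to Dingle
  } else if (viewingPhoto) {
    viewingPhoto = false;               // click anywhere on the photo to return to the map
  }
}
```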