Melissa and I ran some more tests with lasers and water streams, together at school and individually at home. Unfortunately, we are still fine-tuning the alternative light-and-sensor interface that will trigger the MIDI notes.
This is our first attempt at a MIDI output circuit.
This is us using the stock MIDI code from Arduino’s site. We are running the Arduino through a MIDI jack to a MIDI/USB converter and then into a soft synth.
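For reference, the stock code's job is just to push raw MIDI bytes out of the serial port at 31250 baud. A Note On message is three bytes, which can be sketched (and sanity-checked off the board) in plain C++; the channel and note numbers here are arbitrary examples, not our actual mapping:

```cpp
#include <array>
#include <cstdint>

// A MIDI Note On message is three bytes:
//   status (0x90 | channel), note number (0-127), velocity (0-127).
// On the Arduino, each byte is pushed out with Serial.write() at 31250 baud.
std::array<uint8_t, 3> noteOn(uint8_t channel, uint8_t note, uint8_t velocity) {
    return { static_cast<uint8_t>(0x90 | (channel & 0x0F)),
             static_cast<uint8_t>(note & 0x7F),
             static_cast<uint8_t>(velocity & 0x7F) };
}

// Note Off is status 0x80 | channel (many devices also accept
// a Note On with velocity 0).
std::array<uint8_t, 3> noteOff(uint8_t channel, uint8_t note) {
    return { static_cast<uint8_t>(0x80 | (channel & 0x0F)),
             static_cast<uint8_t>(note & 0x7F),
             static_cast<uint8_t>(0) };
}
```

The MIDI jack is wired to the Arduino's TX pin (through a current-limiting resistor); the MIDI/USB converter and soft synth just see the same three bytes.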
This is one of our laser tests. I think it’s a cool effect but if we do use it, it will be more for aesthetic purposes.
We met again yesterday to test out our pump and PVC pipes for the fabrication part of the project. I think we are in a good place, but we still have to catch up on the deliverables. Luckily I took next week off to focus on getting this completed.
DELIVERABLES (in progress):
Melissa and I decided to work together for the final project. We are working on creating a water instrument. The main interface will be a waterfall or water sheet.
I am thinking about the pros and cons of it being a MIDI controller versus a function generator.
+Post Final Project Concept discussion (11/12/16):
We had a very productive demonstration of our water synth project. Here is a video of how we simulated our concept.
We received a lot of great feedback from our classmates and, of course, our instructor, Benedetta. After the testing, we knew we had a good idea and were confident it was something we could pursue.
After the demo we met to test our idea. We wanted to have several lasers travel through a waterfall. Unfortunately that is not easy to do and the results were not promising. We were able to get the laser to travel along a stream but it became increasingly difficult to get a projection the higher we went.
I enjoy listening to and playing synthesizers. These days most synthesizers are digital. That’s great because it allows them to be cheaper and reach more people while offering greater flexibility with sound. However, there’s a limit to the resolution of digital synthesizers, and we also lose a lot from circuits that simulate or model an analog synth. Analog synths, on the other hand, are expensive, messy, heavy, etc.
1- One idea for a final is to bring analog qualities back into sampled and function-generated waveforms. Usually water and synthesizers don’t mix, but if I can run a MIDI controller through a circuit that uses a water-height sensor, I think I could add analog elements back into the sounds being played. The water could also become the expression interface for the instrument itself: I can use other physical qualities of water to manipulate sound and modulation.
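As a rough sketch of idea 1: a water-height sensor would come in as a 10-bit analog reading, which gets rescaled to a 7-bit MIDI Control Change value before being sent alongside the controller's notes. The input range and the choice of CC 74 (filter cutoff) below are assumptions for illustration:

```cpp
#include <algorithm>
#include <cstdint>

// Reimplementation of Arduino's map(): rescale x from one range to another.
long remap(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Hypothetical mapping: a 10-bit water-height reading (0-1023) becomes a
// 7-bit MIDI CC value (0-127), e.g. for CC 74 (filter cutoff).
uint8_t heightToCC(long reading) {
    long v = remap(reading, 0, 1023, 0, 127);
    return static_cast<uint8_t>(std::clamp(v, 0L, 127L));  // guard out-of-range readings
}
```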
2- A second idea I had was to detect position along an RGB plane and have that affect color output, among other things. There are already MIDI controllers that allow sound modulation along trackpad axes, but this could still be an interesting project if there is a unique application for it.
For our PComp midterm, everyone in my class was assigned to groups to work on a project. I initially thought it would be a good idea to use a sensor array to map out a color space and have the output reflect where the user was activating it. We ran into some issues with the microphone behavior, so we decided to repurpose our configuration for another interaction.
We ended up basing the interaction on sound and distance. We mapped 3 sensors to corresponding color values and sent each color to 3 LEDs in series. Our idea was that we could get the colors to blend if we diffused them at a distance.
One of the biggest problems we ran into was dealing with the noise of the environment and the circuit. We couldn’t get the sensor readings to zero, or even close to it, so we ended up having to deal with low-resolution values across our sensor range.
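One common workaround for a sensor that never reads zero is to measure the ambient noise floor, subtract it, and stretch whatever range is left over the 8-bit PWM range that analogWrite() expects. The floor and ceiling values in this sketch are made up; in practice they would come from calibrating each mic in place:

```cpp
#include <algorithm>
#include <cstdint>

// Map a noisy sensor reading to an 8-bit LED brightness by subtracting a
// measured noise floor and rescaling the remaining usable range to 0-255.
uint8_t micToBrightness(long reading, long noiseFloor, long ceiling) {
    long v = (reading - noiseFloor) * 255 / (ceiling - noiseFloor);
    return static_cast<uint8_t>(std::clamp(v, 0L, 255L));
}
```

Readings at or below the noise floor clamp to 0, so the LED actually turns off in a quiet room.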
This is the proof of concept. Using one microphone to control the brightness of the LED.
We then expanded the concept to 3 microphone sensors.
At the same time, we wanted to work on serial communication with P5. We ran into some additional issues here, most of which probably stemmed from not using a handshake for communication.
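The handshake (call-and-response) pattern we skipped is simple in outline: the Arduino sends one reading only after the browser sketch sends a byte back, so the serial buffer never fills with stale data. Here it is modeled as a tiny state machine; on the board the "send" would be something like Serial.println(analogRead(A0)), with the p5 sketch writing a byte after each read:

```cpp
// Minimal model of call-and-response flow control over serial.
// The sender only transmits after the receiver has explicitly asked.
struct Handshake {
    bool requested = false;   // true once the other side has asked for data

    void onByteReceived() {   // any incoming byte counts as a request
        requested = true;
    }

    bool shouldSend() {       // returns true exactly once per request
        if (!requested) return false;
        requested = false;    // reset and wait for the next request
        return true;
    }
};
```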
I first worked on the Async serial communication lab. This went fairly well. I didn’t use the accelerometer because I haven’t soldered the contacts yet. Instead I used 2 variable resistors as seen in the pic.
I ended up pasting both lines in it. The add-file function doesn’t seem to work for me, so after a blank file was added, I pasted the contents of p5.serialport.js into it.
At some point I was able to find out my serial port name but I can’t seem to open it properly at this point. I am showing the error below. I’ll need to ask for help getting this running.
For my basic application I planned on making an alarm box. When someone opens the box when it’s armed, it will flash lights and make a noise. This would occur until a green button is pressed. A red button will be used for arming the alarm.
I think I would have pulled it off if I had designed my circuit beforehand. I was still able to trigger an alarm and make the lights and speaker cycle on and off. I made a lot of errors running wires in the wrong direction or forgetting to power the breadboard.
For the past week I worked on catching up with the lab assignments. This is the button switcher with LEDs in parallel.
The next lab involved a potentiometer and an LED to simulate variable voltage.
Then I built the Servo variable resistor circuit. That was fun and noisy.
Now the speaker with photoresistor.
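The logic of that lab is a one-line rescale: analogRead() gives 0-1023 from the photoresistor, and tone() wants a frequency in Hz. The output range below is my guess at something audible, not the lab's exact numbers:

```cpp
// Map a 10-bit light reading (0-1023) to a speaker frequency in Hz,
// using the same formula as Arduino's map(). On the board this would feed
// tone(speakerPin, lightToFrequency(analogRead(A0))).
long lightToFrequency(long reading) {
    return reading * (1500 - 120) / 1023 + 120;  // roughly 120-1500 Hz
}
```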
The only issue I had was breaking off a lead wire in an Arduino pin socket. Luckily that pin led to ground. I also got to use a soldering iron to connect the wire leads to my speaker.
Now hopefully I can work on an application of these for tomorrow’s class…
The interactive device I decided to monitor is a parking meter. The physical function of a parking meter is to take your money and return you a talisman. This talisman protects you from receiving cursed objects on your windshield. The cursed objects actually cost more money to get rid of. It behooves you to pay to get a talisman.
A NYC parking meter has one purpose as far as civilians are concerned: to allow people to park their motorized vehicles in designated parking areas during the times listed on street signs. I would assume a person selects the amount of time they would like to park and then enters a form of payment to get a receipt to place on their dashboard.
However upon closer inspection, it doesn’t work that way. First I am not sure how many blind people use this machine, but that edge case is not covered here. There is an audio jack that I assume outputs some sort of audio instructions. The language button, which is a different shade of grey than the audio jack, would probably cycle through other common languages in NYC for the audio.
A user walks up to the device and has to insert their credit card or start feeding in $1 or 25-cent coins. I’ve noticed that if people with credit cards don’t leave the card in the machine for a few seconds, it won’t work. This could be a result of the new chips that recent cards feature, but there is no indication to leave the card in the slot until prompted. Some people waited longer than others for it to verify their cards.
After that it seemed pretty intuitive. You increment the time you want (up to the max) and pay the related price. NYC included a max/time button, which selects the highest time allowed. This actually saves you 3-7 clicks depending on the increment scale. After the time is selected, you print the receipt or choose to cancel everything. A downside is that you have to pay per half hour, so if you arrive 15 minutes before the free parking period, you still pay for the entire half hour.
While it’s nice that it accepts coins, that’s cumbersome and not useful unless the user planned ahead. I saw someone have to go back to their car to harvest some coins from it. Overall, I would say the user interface is designed well. They used contrasting colors on the buttons so you’re less likely to press the wrong one, and they made the Print button big and green.
Image credit: Pureandapplied.com
The quickest task to do is entering coins and pressing the Print button. The longest possible task ignoring the audio function would be to use a credit card and increment to the max. Then press the Print button. There’s usually no line and only one person waiting if there is. Ignoring the guy running to his car for coins, I would say it’s possible to complete the transaction in under 15 seconds. The credit card method is around 20-30 seconds.
After this class’ discussion and exercise, and reading Chris Crawford’s definition and Bret Victor’s rant, how would you define physical interaction?
After reading the authors’ thoughts, I developed a greater appreciation for the nuances involved in designing interactivity. I agree it’s a buzzword that is overused. I also agree that there is a spectrum of interactivity and it is important to have meaningful communication with all ‘actors’ involved.
I feel Crawford’s definition of interaction was mostly accurate. Can you call a rug with a map on it interactive when a plain-colored rug is not? I can see why it’s easy to mislead people when the term is so loosely used. A guitar tuner is responsive, while an automated guitar-peg tuner is interactive. But is the peg tuner really interactive? Are you playing with the expectation of being corrected throughout the show? Does this function affect your playing, and vice versa? Is a 3D movie more interactive than a 2D one, if at all?
The other side of the coin: do we want interaction everywhere? Does everyone need to have a romantic relationship with their toilet? Who wants to hear a greeting when they poop? Do I want the LEDs in the toilet bowl to change when it’s reading my dehydration level? I think artists and engineers should always keep this in the back of their minds.
I enjoyed reading the rant even though it was a bit exhausting. The cliché Minority Report navigation system and Star Trek consoles are really not the future of UI. People are too lazy to raise their hands above their heads, let alone stand up, while reading emails. Flat surfaces are unnatural; they don’t really exist in nature. I hated being forced to ditch the sliding keyboard on my phone for an on-screen version. The technology has improved by leaps and bounds since then, but I still know a flat surface is horrible to type on. Also, consider how those with accessibility issues are being further pushed to the sidelines.
How is something flat designed for humans to use? Victor was justified in his frustration with the lack of ingenuity in these visions of future interactivity. We should not try to unify everything into a flat, screen-like device. Traditional artists use pens and brushes, so a successful UI for them would consist of a tactile, digital version of a pen or brush. I feel this idea is very important to keep in mind as one designs products for interaction.
What makes for good physical interaction?
A good physical interaction should be forgettable, mundane, and a natural part of your everyday life. If you asked me what the process of entering a subway in Japan was, I would remember being outside and then being inside. I don’t want to have thoughts about the work done in the middle of it.
Are there works from others that you would say are good examples of digital technology that are not interactive?
I would say e-books and other e-publications tend to fall on the non-interactive side. Novels specifically tend to be very straightforward, as opposed to magazine publications, which are more engaging with web links, embedded video, etc.