Friday, February 26, 2016

Room Prep for Ambisonics, or the Many Ways To Bake a Pi

In further preparation for an ambisonic performance in the Davis studio, I undertook a fairly exacting survey of the speaker positions in the room in order to use the information to fine-tune the ambisonic 'sweet spot'.  This is a short description of my methods and results.

One thing I have learned implementing this project is how to translate between the various 'azimuth' orientations, depending on which coordinate system and compass orientation each system uses.  For example, my sensor gives out Euler angles relative to gravity and magnetic north; the ambisonic library's map function, on the other hand, uses compass-agnostic Cartesian or polar coordinates.  The Cartesian coordinates of a sound position have to be derived from the angles sent by the sensor using various trig manipulations.  Or, if I choose to use polar coordinates, 0 degrees points to the right side of the graph along the 'x' axis.  This creates confusion when orienting the project, because on a magnetic compass 0 degrees is north, which happens to be the 'back' wall of the Davis studio.  Furthermore, the polar system doesn't operate in degrees but in radians, so a translation has to be made there as well.  Lastly, with the 'front' of house being due south, the speaker numbering system starts with the center speaker over the top of the projector screen.
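To keep that bookkeeping straight, it helps me to see the translation written out.  Here's a minimal C++ sketch of the two conversions described above; the function names are mine, and the 90-degree offset assumes we want the compass reading to line up with the math convention of 0 along the +x axis:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Compass azimuth (0 = north, increasing clockwise) to math-style polar
// angle (0 = +x axis, increasing counter-clockwise), in radians.
double compassToPolarRad(double compassDeg) {
    double mathDeg = 90.0 - compassDeg;   // rotate and flip the direction
    return mathDeg * kPi / 180.0;         // degrees -> radians
}

// Polar (radius, angle) to the Cartesian coordinates the map expects.
void polarToCartesian(double r, double theta, double &x, double &y) {
    x = r * std::cos(theta);
    y = r * std::sin(theta);
}
```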

For Ambisonics, the Davis studio presents an 'irregular' (though mildly so) loudspeaker arrangement.  Ideally, Ambisonics wants an evenly spaced circular array of speakers, but because this is rarely the case in real-world situations, compensations have been written.  The authors of the library I am using described their method in this paper from the 2014 ICMC.  Their description of the solution is "...we implement an algorithm that combines Ambisonic decoding and standard panning to offset the missing loudspeakers.  With this technique we can go up to any high order and adapt the decoding to many loudspeaker configurations.  We made tests for stereophonics, quadraphonic, 5.1 and 7.1 loudspeaker systems and other more eclectic configurations at several decomposition orders with good perceptual results."

The practical implementation in the library for irregular speaker arrangements takes the number of speakers and the angle of each speaker around the circle and recalculates the decoding as described above.  Here is a screenshot of the help file:

You can see, to the right of the main objects, some umenus for channels and angles.  Also, inside the hoa.2d.decoder~ you can see flags for mode irregular, channels and angles.  I haven't implemented this yet, but from the help file it looks as though the array starts at the top of the circle and progresses around clockwise in compass style, as opposed to a polar radian or Cartesian -pi,pi arrangement.  (The optim~ object above the decoder object is also important for irregular arrangements; its utility is addressed in the paper as well.)

So all that is left to do is to find the angles of the speakers in relation to the room.  I did this first by establishing the very center of the room.  I marked this with masking tape, and in the future it may be valuable to add a permanent paint spike at the spot.  After that I went around the room with the laser ruler and took the distance from the cone of each speaker to the north and the west walls.  I then subtracted those numbers from the wall-to-center-of-room distances to position each speaker in an x-y relationship with the center of the room.  I then did the operation arctan(opposite/adjacent) to get the angle of each speaker in relation to the 0-degree line, represented as the line between the center of the room and the center speaker over the screen.  As the room is fairly symmetrical over that line, I assumed the east side of the room to have roughly the same measurements as the west side.  Although it isn't needed for Ambisonics, I also ran the numbers through Pythagoras to get the distance from the center of the room to each speaker cone for future reference; I believe VBAP likes to have that number.
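Since I was going on feel with the trig, here is the same arithmetic as a small C++ sketch.  The offsets are made-up numbers for illustration, not my actual tape-measure readings:

```cpp
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

int main() {
    // Hypothetical x-y offsets (in feet) of one speaker from room center.
    double x = 14.6, y = 5.7;

    // arctan(opposite/adjacent), but quadrant-safe, then radians -> degrees.
    double angleDeg = std::atan2(y, x) * 180.0 / kPi;

    // Pythagoras: straight-line distance from room center to the cone.
    double radius = std::hypot(x, y);

    std::printf("%.1f degrees, %.2f feet\n", angleDeg, radius);
    return 0;
}
```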

Here's the results:

  1. 0 degrees, 15' 2"
  2. 68.7 degrees, 15.65' (and 7)
  3. 113 degrees, 15.6' (and 6)
  4. 161 degrees, 15.1' (and 5)

The Genelecs came out to 24.4 degrees at 14.45 feet, but I don't know if they're going to be included in the array.  I am giving some thought to just using 1, 2, 4, 5 and 7.  If you look at the numbers, they are very evenly spaced in radius from the center of the room, and their angles wander around the 72-degree area, which is the even division of 360 by 5.  That would give me a regular speaker array and eliminate the need for irregular-array processing.

I have two goals going forward.  With the relationship of the speakers to the center of the room now established, I'm going to go around and check, and maybe adjust, the angle of address of each speaker to tune it to the center of the room.  I also intend to double-check these measurements and then compile them into a detailed diagram that can be kept on hand for further work and learning in the Davis Studio.

BREAAAK!



Thursday, February 25, 2016

We have a spinning wheel, baby!!!!!!

So I constructed the first wheel.  Learned a lot about steel pipe, galvanized pipe, stainless steel, and the art school: lots of things you can and cannot do in these areas.  But I successfully created a base, leg and arm for the spinning wheel.  The bearings I ordered didn't come in yet, but I think it's spinning well enough with my new end caps, so I am going to see how long those last.

Next is to see what kind of light data spinning wheels even give a photocell!!!  Hopefully something!!!

Saturday, February 20, 2016

Ambisonics

I've still got a little bit of build to go, but I'm focusing on the sound creation this week.  I think my project can be used in a variety of ways, but one of my biggest interests is the spatialization of sound: recreating 3D sound environments and manipulating them.  I think my project lends itself very well to this application.  The nature of the interaction with the Bucky is hemispherical, with the round outer edge easily translating to the horizon and the various motions back and forth from the edge mapping nicely to an overhead dome.  The data that comes from the onboard IMU maps fairly easily to the coordinates of a virtual soundfield, as I showed in my previous post.  The next step is figuring out how to implement the virtual soundfield.  I already had a specific library picked out in MaxMSP, but in a conversation Dr. Gurevich brought up a competing implementation, so I thought I should take a look at the two for the sake of thoroughness.

The method I was planning on using is one I have used before, though only in a binaural (headphone) implementation: Ambisonics.  The other method is a slightly newer system called Vector Base Amplitude Panning (VBAP).  They both have their strengths and weaknesses, and I'll try to survey a few of them here.  The VBAP system was developed by Ville Pulkki, who I believe is based in Helsinki.  From my very cursory research, VBAP seems like an approach based on the traditional stereo panning concept, where the ratio of the intensity of sound coming out of two speakers gives the perception that the sound is actually somewhere in between them.  VBAP extends this idea to any number of loudspeakers.  Using vector-based mathematics to set the relative position of the sound and then combining it in a matrix with the loudspeaker positions and distances, a convincing, easy-to-manipulate virtual soundfield can be created.  Like the stereo system, it uses ratios of intensity to position the sound, but instead of a stereo pair it uses loudspeaker triplets, with loudspeakers above the plane, to give elevation information.  It also smooths the transfer of sound from one triplet to the adjacent triplets in order to cover the full range of motion.
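To make the vector idea concrete, here is a rough 2D sketch of the panning math as I understand it, reduced to a single loudspeaker pair.  The function and variable names are mine, not Pulkki's, and a real implementation would also handle choosing which pair (or triplet, in 3D) the source falls between:

```cpp
#include <cmath>

// Express the source direction as a weighted sum of the two speaker
// direction vectors, then normalize the weights for constant power.
// Assumes the source lies within the arc between the two speakers.
void vbapPairGains(double srcDeg, double spk1Deg, double spk2Deg,
                   double &g1, double &g2) {
    const double d2r = 3.14159265358979 / 180.0;
    double l11 = std::cos(spk1Deg * d2r), l12 = std::sin(spk1Deg * d2r);
    double l21 = std::cos(spk2Deg * d2r), l22 = std::sin(spk2Deg * d2r);
    double px  = std::cos(srcDeg * d2r),  py  = std::sin(srcDeg * d2r);

    double det = l11 * l22 - l12 * l21;          // invert the 2x2 speaker matrix
    g1 = ( px * l22 - py * l21) / det;
    g2 = (-px * l12 + py * l11) / det;

    double norm = std::sqrt(g1 * g1 + g2 * g2);  // constant-power normalization
    g1 /= norm;
    g2 /= norm;
}
```

The gains come from inverting the little matrix of speaker direction vectors; normalizing them keeps the loudness roughly constant as the source sweeps between the speakers.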

Ambisonics, on the other hand, is much less intuitive 'under the hood'; instead it uses some very cool psychoacoustics, which I just barely understand, to achieve the spatialization effect.  It's an expansion of Alan Blumlein's invention, the Mid-Side microphone technique.  Instead of VBAP's localization of the sound within a triplet of loudspeakers, the position information is encoded in and emanates from ALL of the loudspeakers through a system of phase cancellation and correlation (I think...even after reading it a hundred times it still seems like magic to me.  Probably why I'm a sound lover; this stuff just fascinates me).  Ambisonics necessitates encoding and decoding stages on either side of the position determination, which can be processor intensive; this is one of the reasons 5.1 surround has surpassed Ambisonics for surround sound in consumer electronics.  Until recently, the processing power needed put the decoder price point way out of range for anyone but enthusiasts.
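For comparison with the VBAP sketch, the first-order 'B-format' encoding equations are the standard textbook starting point for Ambisonics, and they show where the everything-comes-out-of-all-speakers behavior begins.  A quick sketch, simplified to the horizontal plane:

```cpp
#include <cmath>

// First-order (horizontal) B-format encoding: a mono sample s at azimuth
// theta is spread into W/X/Y components, which a decoder later recombines
// for every loudspeaker in the array.
void encodeFirstOrder(double s, double theta, double &W, double &X, double &Y) {
    W = s * 0.70710678;        // omnidirectional component (1/sqrt(2))
    X = s * std::cos(theta);   // front-back figure-of-eight
    Y = s * std::sin(theta);   // left-right figure-of-eight
}
```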

The two systems, as I understand it, produce a relatively similar result, so the differentiation is in the implementation.  Which one to use may depend on the situation, one being better for a permanent installation and the other for undetermined performance spaces; I don't have enough experience to make that call.  The VBAP system requires entering each speaker's position and distance from the 'listener', and so is a little more involved in initial setup.  Ambisonics, as far as I can tell, is more speaker-position 'agnostic' (at least within reason), making setup much easier and allowing for a variety of performance settings.  The problem with Ambisonics is that it has a much tighter 'sweet spot' (though I understand it is getting wider as decoding speeds up and more thorough HRTFs are implemented), and there are certain perceived phasing artifacts if the listener moves their head too quickly in the sweet spot.

There are libraries for both methods readily available for MaxMSP.  On the VBAP side, the library is written by the method's inventor, Pulkki, and he has a paper on the implementation here.  On the Ambisonics side, there are a few libraries out there.  This page at Cycling 74 has a couple of the proven ones, including the High Order Ambisonics (HOA) library from CICM, which I have used before.

I think I'm probably still going to go with my initial instinct and use Ambisonics.  I would like to give a well-thought-out reason for this, but it's mostly based on my feeling that Ambisonics is just too cool a psychoacoustic phenomenon not to play with.  Also, the HOA library implementation is very advanced, including objects that aid in connecting the thousands of patch lines inherent in spatialization.


Friday, February 19, 2016

Max Patch for Water Glass

I've spent this week figuring out how to get the serial data from the Arduino side of the water glass to communicate with a Max patch, where I can map audio effects onto different gestures.  I found a few versions of a Max patch online that could communicate with the Touché sensor, most helpfully from madlabk, but it took quite a few hours of debugging to work through his code and adapt it to my purposes.  Now I have a working Max patch that has a training algorithm to recognize gestures, with a debouncing function!
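The debouncing idea is simple enough to show in a few lines.  This isn't madlabk's code or anything lifted from the patch, just a generic C++ illustration of the principle: only accept a recognized gesture after it has been reported for several consecutive frames, so momentary misclassifications don't trigger effects.

```cpp
// Debounce a stream of gesture classifications by requiring stability.
struct GestureDebouncer {
    int stable = -1;      // last accepted gesture id (-1 = none yet)
    int candidate = -1;   // gesture currently being counted
    int count = 0;        // consecutive frames for the candidate
    int threshold = 5;    // frames required before switching

    int update(int raw) {
        if (raw == candidate) {
            if (++count >= threshold) stable = candidate;
        } else {
            candidate = raw;  // new candidate: restart the count
            count = 1;
        }
        return stable;        // feed this, not raw, to the effects mapping
    }
};
```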


I realized there won't be a way to train the patch to recognize gestures away from the performance stage, because the sensor can't move too much between backstage and where I will perform, or else the readings it stores will no longer be valid.  Therefore I will have to train the gestures on stage, akin to how a string player tunes on stage.  I'm thinking now about how to incorporate this process, and my interaction with the laptop in general, into the performative arc of the piece, which may be a useful question for everyone else!

Monday, February 15, 2016

Monochord / Installation Idea?


At the beginning of the semester, I was looking at instruments that were new to me and came across the monochord. Although smaller monochords do exist, I am most intrigued by the large ones.

The beauty of the monochord is how easy it is to play. This goes against what I'm trying to build as an instrument for my current project, but it has me thinking ahead to how I could apply a certain aspect of it to some kind of installation. Though I do not necessarily want to work with strings (even though that is a possibility), I would really like to build something like this: its function is apparent, it's easy to play, and the simple drone-like sound is extremely soothing. I think bringing those qualities to an interactive installation would be extremely interesting.

Saturday, February 13, 2016

Math



So, I use math like an untrained songwriter uses music theory.  I've got a basic feel for how it goes, a general sense of how it works, and I just keep trying until something pops out that makes sense.  That's how my research this week went.  At first I was going about it very officially.  The sensor I'm using outputs XYZ data in various forms, but the one that seemed to make the most sense was the Euler angles.  I researched Euler, pronounced 'oiler' as in Edmonton (a nod to our Canadian colleagues, who in the PAT department are legion), whom I'd never heard of, but who is to mathematicians what Charlie Parker is to sax players, or what Claude Shannon is to tech folks.  I tried to understand how to calculate coordinates on a sphere, I reread all my trig, I pored over math blogs.  No luck; I just couldn't make the connection.  I ended up spending a lot of time using my hand to represent the planes and thinking about it over and over again.

As I played with the Bucky a little, looking at the data on the screen, a feeling (not an understanding) started to come to me; there was a correlation and I just had to find out what it was.  My data on the x plane was tied to magnetic north, so I wasn't interested in that, because the device needs to work no matter what direction it's pointed when someone picks it up.  But the Y and Z planes were both in the range of -40 to 40.  I decided to scale them down to a range of -1 to 1 and have a look.  I realized that they were behaving like Cartesian numbers, even though they were angles; it must be some derivative that I'm not aware of.  So, if you hold out your hand, the y axis stretches from pinky to thumb and the z axis goes from fingertips to wrist.  The combination of the two creates a vector from the origin, which is just what I needed to establish a position for the data!  Way simpler than coming up with some trig monstrosity that ate cycles and scrambled brain cells.  So here is a video of the control I was able to find:



The 'map' you see on the screen is actually a sound field map of an ambisonic encoder, so I've made my first step toward having an Ambisonic Controller.  Cheers!
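For the record, the scaling step itself is trivial.  A sketch of it in C++, with the caveat that the -40..40 range is just what I observed on screen, not a datasheet figure:

```cpp
#include <algorithm>

// Scale a sensor reading from the observed -40..40 range down to the
// -1..1 range the ambisonic map expects, clamping any outliers.
double toUnitRange(double reading) {
    return std::clamp(reading / 40.0, -1.0, 1.0);
}
```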


Monday, February 8, 2016

Polyphonic Arduino Tone Generation With Mozzi Library

Last semester I took Dance Related Arts and had the opportunity to build three instruments that were incorporated as stage pieces in our performance. The theme for my group was the impact of internet surveillance and the increasing presence of social media and technology in our lives.  One of the instruments I built was an array of 5 photocells, each generating its own tone according to the light falling on it. In order for the instrument to live on stage, I had to either develop a bluetooth/wifi system for sending sensor data or figure out a way to achieve sound synthesis/amplification directly on the Uno board. I chose the second. Here are a couple clips:






In order to synthesize multiple tones and have advanced control over their parameters, I used a library called Mozzi. Mozzi opens the Arduino environment up to things like multiple oscillators, envelopes, filtering, delay and reverb. It supports sensors like piezos, photocells and FSRs right out of the box and can be easily modified for any other sensor/trigger mechanism. 
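For anyone curious, the skeleton of a Mozzi sketch looks roughly like this.  It's a minimal single-oscillator example in the style of Mozzi's own demos, with a photocell mapped to pitch; the pin and the frequency range are placeholders, not my actual build:

```cpp
#include <MozziGuts.h>
#include <Oscil.h>
#include <tables/sin2048_int8.h>   // a sine wavetable shipped with Mozzi

#define CONTROL_RATE 64            // control-rate updates per second

// One oscillator reading the sine table at audio rate.
Oscil<SIN2048_NUM_CELLS, AUDIO_RATE> aSin(SIN2048_DATA);

void setup() {
  startMozzi(CONTROL_RATE);
}

void updateControl() {
  int light = mozziAnalogRead(A0);               // photocell on A0
  aSin.setFreq(map(light, 0, 1023, 100, 1000));  // brightness -> pitch
}

int updateAudio() {
  return aSin.next();              // next sample from the wavetable
}

void loop() {
  audioHook();                     // required: drives Mozzi's audio engine
}
```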

My final design was an array of 5 photocells hooked into an Uno, amplified via a 1/8-inch connection to a JamBox.


Here is the library : http://sensorium.github.io/Mozzi/

Friday, February 5, 2016

Looks Great On Paper!

As so often happens when you close the gap between idea and reality, it's the little banal details that start to turn into big problems; molehills become mountains.  The current design I'm working on is no different.  That being said, I've never had a prototype go as smoothly from conception to an object in hand as this one has; chalk that up to the printing technology, which lived up to its main claim to fame: fast prototyping.  So, without further ado...



So first, the good things.  It's great to have the object in my hand, to be able to play with it and feel its strengths and weaknesses.  The interaction feels much as I imagined it, and the scale to the human hand is just about right.  The assembly was fairly straightforward and how I imagined it...no 'oops, I didn't think of that.'  Also, I was able to get it into the hands of my design focus group (my wife) for immediate evaluation and feedback.  For a first iteration, I'm very satisfied.  Emphasis on first iteration.

The bad things.  Because of the nature of the 3D printing process (I still prefer the word stereolithography, but for the sake of clarity I'll go the more prosaic route), both the housing and the plunger are covered with tiny horizontal ribs.  This makes the action in and out rough, though I imagine it will get better with time as use smooths out the inside.  I could have spent a few hours sanding it, but I decided I wanted to get it assembled and out for a test drive before committing that time.  I used elastic cord for the compression elements, and I think I would rather use metal springs, as the cord is just not strong enough to provide a satisfying resistance.  I'm also a little concerned that the plunge depth is not deep enough to produce the wide range of sensor data that will be needed for maximum expressiveness.  The solution, of course, is to make the housing cylinder longer, but then the device starts to move out of the 'hand sized' scale I was trying to adhere to.  I also think the ring could be a little larger, so in the next iteration I'm going to have to strike a balance between those two opposing concerns.

The results of the focus group testing proved inconclusive.  When she first interacted with it, she didn't roll it around on the ring like I envisioned.  I thought the design just begged for that motion, but apparently it didn't.  All is not lost; it may become more apparent when there is feedback (sound) hooked up to the interface and the movement is tied to that.

Moving over to the electronics department...




The i2c bus works like a charm!  I've wanted to try out sensors using i2c for a while, and this was a perfect opportunity.  The i2c bus is an old but proven technology that uses a data line and a clock line (SDA and SCL) to move information through your circuit.  So instead of running a bunch of wires back to the microcontroller for each sensor reading, you only run these two.  Each sensor, or slave, device has its own address on the bus, and the i2c protocol sorts it all out for smooth sensor data from multiple devices over minimal material.  The data line changes state in step with the steady clock pulses, and each bit is read against the clock.  I found an informative graphic on SparkFun:



 Adafruit supplied the chips and as usual a super friendly library and scads of documentation.  It turns out that they supply so many different sensors and upgrades to their old sensors that they have come up with a meta-library to handle reading the sensors for all of their products.  It's called the Unified Sensor Library and it makes getting data as easy as calling a member function of a declared object.
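Reading one of these over i2c really is about a one-call affair with the Unified Sensor Library.  Here's a hedged sketch, assuming an Adafruit BNO055-style orientation breakout (the specific part, the sensor id and the baud rate are my guesses for illustration, not necessarily what's on my board):

```cpp
#include <Wire.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_BNO055.h>

Adafruit_BNO055 bno = Adafruit_BNO055(55);   // sensor id is arbitrary

void setup() {
  Serial.begin(115200);
  if (!bno.begin()) {                        // joins the i2c bus internally
    Serial.println("No orientation sensor found on the i2c bus");
    while (1);                               // halt if nothing answers
  }
}

void loop() {
  sensors_event_t event;
  bno.getEvent(&event);                      // the single member-function call
  Serial.print(event.orientation.x); Serial.print(" ");
  Serial.print(event.orientation.y); Serial.print(" ");
  Serial.println(event.orientation.z);       // Euler angles, in degrees
  delay(100);
}
```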

Now that I've got the data, I've got to figure out what to do with it.  XYZ orientation data is the obvious choice, and I don't really even need the X data, which would tell me which way the object is turned from magnetic north.  I'll end up mapping the Y and Z data to the unit circle on a polar, as opposed to Cartesian, graph, and then adjusting the scale of the graph based on the depth of the plunger, in order to compensate for different readings at different elevations of the plunger.  Or something like that; my math is a little rusty in this area, so I'm just going on hunches at this point.  But it looks good on paper!
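Since I'm going on hunches, here's the hunch written down as a C++ sketch.  The depth-compensation curve is entirely invented and will surely change once real readings come in:

```cpp
#include <cmath>
#include <algorithm>

// Map scaled Y/Z readings (-1..1) onto polar coordinates, shrinking the
// radius scale as the plunger goes deeper. 'depth' is a hypothetical
// normalized plunger reading in 0..1; the 0.5 factor is a placeholder.
void mapToPolar(double y, double z, double depth,
                double &r, double &theta) {
    theta = std::atan2(z, y);            // angle on the unit circle
    double raw = std::hypot(y, z);       // distance from the origin
    double scale = 1.0 - 0.5 * depth;    // assumed depth compensation
    r = std::min(raw / scale, 1.0);      // stay inside the unit circle
}
```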

Wednesday, February 3, 2016

Sound Inspiration

I believe it was Spencer who brought up http://youarelistening.to/chicago in John Granzow's Performance Systems class. On the site, you have the opportunity to listen to live police scanner feeds from various cities while an ambient music playlist plays in conjunction with the police scanners.

I think there's a nice juxtaposition between the very calm, ambient music and the crackles of the police scanner. In class we delved into how there's an interesting philosophical element with the violence/chaos usually associated with police scanners and how it can almost become just background noise, particularly when played with the ambient playlist.

I could delve into more philosophical detail, but I think I'll save that for another time!

Anyway, I like that kind of juxtaposition in sound, with the smooth ambient background being...interrupted? by the police crackles, and it's something I'm taking into consideration with my own sound design and sound space.

Tuesday, February 2, 2016

Dulcimer influences on a gurdysian design

Through continued sketching of the Gurdysian Manipulator, I found myself drawing inspiration from Appalachian dulcimers for the left hand control layout.

Like a hurdy gurdy, the Appalachian dulcimer is modally keyed. Playing a melody on a dulcimer looks (and often sounds) similar to playing a melody on a hurdy gurdy: one reaches over the body/neck to play monophonic lines while the other hand excites all the strings (...usually). This design construct could be useful in the case of a gurdy inspired digital instrument.

I'm considering using 3 linear soft potentiometers (+ linear FSRs?) in a string-like configuration to set boundaries for looped audio material. This could allow one-handed control of a loop's beginning and end, as well as a "set" function. Perhaps these could also be used to control effects if I design it to be multimodal.
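To make the control idea concrete, here's a hypothetical Arduino-style sketch of it; the pins, the threshold and the decision to commit loop points only on "set" are all inventions for illustration:

```cpp
// Two soft pots set a loop's start and end (as fractions of the buffer);
// a third acts as the "set" control that commits the boundaries.
const int START_PIN = A0, END_PIN = A1, SET_PIN = A2;

float loopStart = 0.0, loopEnd = 1.0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  float a = analogRead(START_PIN) / 1023.0;   // finger position, 0..1
  float b = analogRead(END_PIN)   / 1023.0;
  if (analogRead(SET_PIN) > 512) {            // press "set" to commit
    loopStart = min(a, b);                    // keep start before end
    loopEnd   = max(a, b);
  }
  Serial.print(loopStart); Serial.print(" -> "); Serial.println(loopEnd);
  delay(50);
}
```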

I'll post a sketch here within the week. If this works, I think I'll change the name from Gurdysian Manipulator to Gurdulcimator.

3D Printing an Arduino Mount

I've had the opportunity in PAT 461 to 3D print some objects for my first instrument. I printed an Arduino Uno mounting bracket and a table. Unfortunately, the table I made wasn't fully solid, and the print skipped one of the legs.

Here are some images of the two parts:


The Arduino Uno mounting bracket was made by a user in SketchUp's 3D Warehouse community and fits the board well. I've enjoyed making models and being able to confirm that an Arduino board will integrate with them. Prototyping instrument designs in CAD is fast and is now a viable option for developing Arduino-based devices.

STL file for bracket here: https://umich.box.com/arduinounostl