Showing posts with label Soundscape. Show all posts

Tuesday, April 21, 2020

#OUTNOW: Intra Muros

I'm pleased to announce the release of "Intra Muros", a new ambient album I composed with Enrico Ascoli:
 
Check it out here: linktr.ee/matteomilani

Intra Muros is a project born during the quarantine nights of March 2020, via a remote collaboration (Milan–Turin).
The eleven sound objects, improvised day by day in a sort of creative ping-pong between the minds of Matteo and Enrico, overcome the physical barriers of their confinement and recompose themselves in the form of apparently stable, yet remarkably articulated sound sculptures.
During the quarantine, apartments became simultaneously protective shells and static prisons that vibrate, perturbed by ambivalent feelings of calm and tension.
The tracks symbolize these movements of the soul, in constant balance between fever pitch and the search for a new inner peace, giving life to immersive and iridescent sound textures.

Composed & Mixed by Matteo Milani & Enrico Ascoli 
Mastered by Matteo Milani

Friday, September 27, 2019

OUT NOW: Kyma Ambiences vol.2


Kyma Ambiences vol.2 contains 100 abstract and evolving ambiences created with our beloved Kyma sound design workstation.
Due to the increasing demand for spatialized content in game and VR/AR projects, this inspiring collection has been designed entirely as a spherical representation of sound, natively in 3rd Order Ambisonics, by generating all the Sounds in Kyma and mastering them in Pro Tools through a custom analog setup.
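For reference (a general Ambisonics fact, not something stated in the library's documentation): a full-sphere Ambisonic signal of order N carries (N + 1)² channels, so third order means 16 channels per ambience. A quick sketch:

```python
def ambisonic_channels(order: int) -> int:
    """Channel count for a full-sphere Ambisonic signal of the given order."""
    return (order + 1) ** 2

# First order is the familiar 4-channel B-format (W, X, Y, Z);
# third order, as used in this library, needs 16 channels per file.
print(ambisonic_channels(1))  # 4
print(ambisonic_channels(3))  # 16
```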



Create a truly immersive and dystopian atmosphere with the 2nd volume of the classic Kyma Ambiences sound effects library.

FIND OUT MORE

Friday, November 06, 2015

GRM pt.3: What made Syter original

(Continued from pt.2)

An excerpt from the booklet about the Syter system at INA - GRM | Archives GRM (CD 4) - by Daniel Teruggi

Looking back on it now, it is not easy to describe just how original the tool was for the time. It was at this time that the first "black boxes" were beginning to appear in the shops, at very high prices, enabling users to do a limited amount of processing to sounds. It was impossible to programme these devices. They had a certain number of parameters which were determined in advance and could be controlled using buttons and potentiometers to simulate analogue devices. At the same time, the first samplers were appearing, enabling users to record a sound just a few seconds long into memory and then to replay it, by transposing it and modifying certain parameters.
Syter was all of that and much more besides: processing and synthesis tools, rapid memories, the possibility of reading and recording sounds in real time on a hard disk (500 Mb, which was considered to be absolutely fabulous at the time) and above all, the possibility of reprogramming the processing tools and building new ones to your heart's desire, using a modular programming approach. Syter was the potential book of magic on the basis of which all the existing processing and synthesis methods could be rethought and new processes imagined and designed. All of this had a cost, and the price of the system was such that only one institution was able to buy it (although it was only about 10 times the cost of a synthesiser or a digital processing box at the time), and it required maintenance engineers to keep it running.
The originality came from the fact that processing methods that had come from studio work, and which had been used from the outset for GRM concrete music, were made readily available, without the need to learn programming languages or to have an assistant constantly on hand. In other words, the real originality was to be found in the algorithms and the interfaces.
Concrete music and the use of electroacoustic studios had stabilised and modelled a certain number of sound-related operations on the basis of perception-based concepts. For example, an extremely powerful analogue studio process, "micro-editing", involved cutting minute fragments of sound from magnetic tape (using scissors!), which were then stuck end to end to create a new continuity. This principle was very successfully applied by the deferred time software and by Syter, making it possible to reorganise the material into new coherent sequences. This became known as "brewing". But brewing is not the end of the story, because the difficulty lies in controlling the way the brew comes together. Graphical interfaces, which these days are at the very heart of all computer technology, but which at the time were practically unheard of, were used to visualise the sound and the control parameters, and there was even an interpolation screen for exploring the intermediary terrain between two processing states.
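As a purely illustrative sketch of the "brewing" principle (the fragment length and random reordering here are assumptions for the example, not Syter's actual algorithm), one can cut a sample stream into minute fragments and splice them end to end in a new order:

```python
import random

def brew(samples, fragment_len=512, seed=None):
    """Cut a sound into minute fragments and splice them into a new
    continuity - a digital analogue of tape micro-editing."""
    rng = random.Random(seed)
    fragments = [samples[i:i + fragment_len]
                 for i in range(0, len(samples), fragment_len)]
    rng.shuffle(fragments)           # reorganise the material
    out = []
    for fragment in fragments:
        out.extend(fragment)
    return out

# Every sample of the source survives; only the sequence changes.
source = list(range(4096))
brewed = brew(source, fragment_len=512, seed=1)
assert sorted(brewed) == source
```

Controlling "the way the brew comes together" then amounts to choosing the fragment length and the reordering policy, which is where the graphical interfaces described above came in.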
Syter was a hit with musicians, both for studio work and instrumental work. In the studio, it could be easily built into the existing environment and breathed new life into the palette of processing possibilities. The system was essentially used for the processing of sound, meaning that the composer would record sounds and then modify them using the processing tools that were already built in, or by creating his own tools. In so doing, he would be faithful to the GRM tradition of processed sound, even though many hybrid processing techniques (between recorded sounds and synthesised sounds) provided entirely new kinds of sound. This material would then become (whether or not mixed with other sounds from other sources) the basis on which the composer would build his work.
Furthermore, at the time there was a unique relationship between composers and technical designers who, thanks to the modular programming techniques and their user-friendliness, could quickly build the tools necessary for creative work. A number of models that were later to become GRM Tools were a result of this experimental relationship (in particular Doppler and Pitch Accum). Once they had been built, these algorithms were simple to implement and were integrated into the whole palette of processing tools available in the system (around 40 different algorithms were designed, plus 150 variants of these basic algorithms).

An approach founded in pedagogy

The philosophy of the GRM has always been that the creator should work independently on his own process of composition, without the assistance of anybody else. Most composers had the training necessary to handle the techniques and to understand and work the analogue studio, and only in very rare cases were they assisted by the technician-musician. There was such great interest in the deferred time software and the Syter system, and they were aimed at musicians of such varied backgrounds, that a training programme had to be set up in order to help them come to grips with the different systems. Many of these composers came from an electroacoustic background; many others were not familiar with studio techniques but wished to become acquainted with them and to develop projects bringing together instrumental and electroacoustic techniques. There were also many professionals from other fields: artists, radio and sound technicians, teachers and musicologists.
Week-long courses with small groups of trainees began to be organised 2 or 3 times a year, generally involving 6 to 8 participants (a total of 20 courses between 1985 and 1993). During these courses, the system was explained and the participants had the chance to experiment and play with sounds. The objectives of these courses were manifold: the first was to provide composers with the training necessary for them to be autonomous in their work and to enable them to develop a project.
Another objective was to test the system with users. Because it was such an innovative system, using original approaches with regard to algorithms and interfaces, it had to demonstrate that it was up to the task and that composers could use it easily and efficiently. Around 120 people followed these courses, and 80 works were composed, sometimes several by the same composer.

From pedagogy to production and concert presentation

Many of the composers were attracted by the possibilities offered in terms of the real time processing of acoustic sounds, and embarked on projects that brought together live instrumentalists, real time processing and recorded sounds. Others used the system in the studio, for acousmatic works, either to complement other existing studio technologies and tools, or sometimes as the sole production tool.
I was personally involved in this pedagogical and production aspect of the Syter system for some ten years. When it was first presented in-house in 1984, everybody underlined the technical prowess involved in developing a system of that kind, but there was little enthusiasm on the part of the GRM composers, in light of the small number of existing algorithms and the fact that there were no instructions for use. I was fascinated by this approach and I proposed to Jean-François Allouis that I would help him in his project, in particular by explaining to composers how the system worked and by writing up a manual. We then organised the first training sessions in August 1985 and August 1986, and thereafter I took charge of the courses and production associated with the system and the development of variants of the instruments, in response to requests made by composers. I was therefore able to meet everybody who participated in the courses and I followed everything that was produced using Syter. I also played a great many works that involved Syter for the real time processing of instrumental sound (a task that we became particularly involved in with Richard Bulski, the system technician, especially for moving the system and setting it up for concerts).
I was able to gain an extensive and in-depth knowledge of how the system functioned, so much so that I was able to write my PhD dissertation on Syter (The Syter system, its history, development, musical production and implication in contemporary electroacoustic language, presented in December 1998 at the University of Paris VIII). I composed ten pieces on the system, some of which were with instruments, using the system only to produce electroacoustic sound, and others which were acousmatic, where a great deal of the sound creation work was done on Syter from start to finish. I began to move away from the system in around 1993, when it was beginning to become obsolete and when the first versions of GRM Tools were becoming available on Macintosh, designed and built by Hugues Vinet, who took much of his inspiration from the algorithms of Syter. I also realised, in 1993, that my life had been too wrapped up in the system, when a composer asked me seriously whether Syter was an acronym for System Teruggi!

inagrm.com/grmtools


Sunday, May 26, 2013

Out now: Kyma Ambiences - vol.1 [USO003]

 
Kyma Ambiences [USO003] is the third sound effects bundle created by Matteo Milani (U.S.O. Project). The generation of these "Artificial Reality Ambiences" starts entirely in Symbolic Sound Kyma - during the development of projectgenesismovie.com - from the processing of white and pink noise by filtering - in the time/spectral domains - and convolving these sources with custom FM, additive, formant and granular synthesis.
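As a loose illustration of this workflow (nothing below is Kyma code; the frequencies and durations are invented for the example), one can filter white noise in the spectral domain with a 1/f weighting and convolve the result with a short, windowed FM grain:

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 48000
white = rng.standard_normal(sr)              # one second of white noise

# Spectral-domain filtering: a crude 1/f weighting turns white noise pink-ish
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(len(white), 1 / sr)
weights = 1.0 / np.maximum(freqs, 1.0)       # avoid dividing by zero at DC
shaped = np.fft.irfft(spectrum * weights, n=len(white))

# Convolution with a windowed FM grain colours the texture further
t = np.arange(0, 0.05, 1 / sr)
grain = np.sin(2 * np.pi * 220 * t + 3.0 * np.sin(2 * np.pi * 55 * t))
grain *= np.hanning(len(grain))

textured = np.convolve(shaped, grain)[:len(white)]
textured /= np.max(np.abs(textured))         # normalize to full scale
```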

The composition’s resource of sound material is drawn solely from these processed stochastic sound elements: coloured noise is a raw material already full of life and can be sculpted into a variety of temporal forms, movements and textures. The interaction with Kyma was typical of a composer who explored a device’s potential for sound transformation like a musical instrument. The goal was to obtain an organic and acoustic quality using only a restricted sound source, in order to evoke real spatial characteristics attached to each invented sound.

The sound effects collection is published @ 96kHz (native), plus a budget version @ 48kHz (resampled). “Kyma Ambiences vol.1” is not only available in these two packages, but also as dual-layer separated “Elements”, suitable for recombinant stratification, varispeed and spatial positioning in the surround field (for a total of 112 files @ 96kHz).

Here is what you get in “Kyma Ambiences vol.1”:
  • Stereo Interleaved Files (56 items, duration 120s each)
  • Comma-separated values file (.csv)
  • Excel spreadsheet (.xls)
  • License Agreement (.pdf)
  • Artwork (.png)

Three flavours:     

48 kHz (small): $ 49 - via PayPal  

Audio Format: Broadcast Wave Files (.wav)
Bit Depth: 24-bit  Size: 1.97 GB
Download size is 1.50 GB (compressed .rar archive)

96 kHz (medium): $ 79 - via PayPal 

Audio Format: Broadcast Wave Files (.wav)
Bit Depth: 24-bit  Size: 3.93 GB
Download size is 2.78 GB (compressed .rar archive)

Elements (large): $ 99 - via PayPal    

Audio Format: Broadcast Wave Files (.wav)
Sample Rate: 96 kHz
Bit Depth: 24-bit  Size: 7.86 GB
Download size is 4.99 GB (compressed .rar archive)

Available on www.unidentifiedsoundobject.com

Wednesday, October 10, 2012

Augmented Listening

By Tue Haste Andersen - October 9, 2012

reBlogged from: design mind
 

Stop for a second and listen. Close your eyes, use your ears, and just listen.

Whether you are in a quiet office environment or out on a busy street, you'll be amazed by how many sounds there are around you. Most of us do not pay attention to the ambient sounds that surround us. Our brains filter them out and we don't listen. Yet the sounds we miss can be very enjoyable.

Designed Sounds

Today, what we hear in our daily lives is often designed sound: music and sound effects carefully crafted for games, devices, and products. For example, mission-critical products, such as heart rate monitors used during medical surgery or a plane's flight deck controls, use distinctive alarm sounds that are designed to be easy to perceive and to raise a sense of urgency or danger.
In interfaces for everyday tasks, sound is used to create engaging and beautiful experiences. Sounds can generate a special feeling or underline brand identity while simultaneously providing cues that a command has been received by the system. Most smart phones today come with subtle sounds that indicate the pressing of a touch screen’s virtual buttons. Since there is no way to feel if a virtual button has been pressed, the sounds reinforce the action for the user. Another example can be found in industrial design, where the latest electric cars are being designed with artificial motor sounds. The sounds alert pedestrians to the car as well as reinforce the sense of driving a powerful vehicle. These examples underline the overall trend of sound being used to create an aesthetic experience rather than serving as purely a functional aid to improve interaction.



Blurring the Border Between Listening and Composition

While systems and products are becoming more enjoyable and pleasant to listen to, they are usually not intentionally designed for sound interaction. The emergence of accessible music software on computers and mobile devices is changing this. These programs allow for easy modification of sound by the average user and blur the border between listening and sound creation. The small form and limited complexity of mobile interfaces has forced music software designers to reduce the complexity of their products, resulting in music software that is widely used by average mobile phone users.
Music apps are often top sellers. Popular applications allow people to become mobile DJs, to transform sounds, and to design ringtones.
I was interested in exploring the blur between sound creation and listening when my friend and colleague Matteo Penzo put me in contact with Matteo Milani from the U.S.O. Project sound art group. The ideas and compositions of the U.S.O. Project revolve around the use of noise and ambient sound as a foundation for sound installations and music composition. Together we wanted to create a mobile experience that would support active listening to the everyday sounds that surround us, making the listener a part of a personal sound installation. Instead of creating a tool for recording and transforming sound, we wanted to start from the sounds themselves. Our goal was to reinforce the sounds of the listener’s environment while blending them with more musical sounds. Together the sounds would form a unique experience that could be enjoyed by anybody that has an interest in sound and art.  



Early Experiments

We started with a small prototype app for iOS using simple sound algorithms to blend U.S.O. music with live recording from the iPhone microphone. The prototype was tested with real use cases that included listening to the app while taking a long walk as well as while sitting at the computer in the office. We added many parameters for the user to be able to tweak and play with the sound transformation. The parameters were mapped to on-screen sliders and buttons and to sensors like the accelerometer.
While doing the informal tests we found that users struggled to understand the relationship between the parameters and the sound output; in most cases they would end up spending time experimenting with the parameters to discover how they worked. The visual interface and controls were clearly distracting, taking attention away from the app's original goal of reinforcing ambient sounds for the listener.
Following these early experiments, we decided to take a drastically different approach. We limited the visual interface as much as possible and provided a set of sound themes in the app for the listener to select. This worked much better. All of a sudden the users would pick up the app and, once started, would tuck it away in a pocket while listening to the sounds. Each theme takes sounds from the microphone and blends them with sounds composed by U.S.O. Project. The sounds are blended using sound algorithms, unique to each theme. Each algorithm is carefully calibrated to replicate the work and skill that goes into producing a great listening experience.

Lis10er

The result is Lis10er (pronounced Listener), an augmented sound installation app. Sounds are blended from the listener’s surroundings, creating dynamic music that changes while maintaining its identity. Lis10er provides users with a creative way of listening to their environment and a unique experience with every listen. 


Tue Haste Andersen is a Senior Software Architect based in frog's Milan studio. Tue is a Human-Computer Interaction and Computer Music expert, with research ranging from DJ work practices to the use of sound and music in common interaction tasks. He is also the founder and original author of the popular open source DJ software, Mixxx.

Monday, September 17, 2012

Mirror_Mirror | Sound Installation


Concept and software design by Federico Placidi
Hardware design by Matteo Milani
Woodworker: Fabio Testa
Produced by U.S.O.Project, 2012


Where: [.BOX] Videoart project space, Via Federico Confalonieri 11, Milan
When: Thursday, September 27, 2012 | 6:30 PM until 9:30 PM
Free admission

The concept of the multiverse was first introduced in the so-called “many-worlds interpretation” (MWI) of quantum mechanics by Hugh Everett III in his PhD thesis, "The Theory of the Universal Wavefunction". His model was conceived as an alternative to the renowned “Copenhagen interpretation”, developed by Niels Bohr and Werner Heisenberg.

The MWI postulates that every quantum measurement process (at Planck’s scale) creates, as a consequence, a division of the observed Universe into multiple parallel universes - as many as there are possible outcomes of the measurement.

In different formulations of this concept, all the universes - which together form the Multiverse - are structurally identical, and they can coexist in different states even though they possess the same physical laws and fundamental constants.

We need to take into account that those universes are non-communicating (there cannot be any information exchange between them).

In an episode of the famous TV series “Doctor Who” - the episode was Doomsday, written by Russell T. Davies - the Doctor finds himself forced to make a difficult and dramatic choice: separating from the person who probably loved him the most, by “exiling” her in a parallel universe to guarantee her safety and survival.

It is worth mentioning some parts of the dialogue from the original script of the episode:


Rose comes to a halt in the middle of the beach and stands there, waiting. A short way to her left, the Doctor fades out of thin air. Rose turns to him. He's slightly translucent.

ROSE

Where are you?

THE DOCTOR


(his voice sounds distant)

Inside the TARDIS.

INT. TARDIS

The Doctor is, in reality, standing by the TARDIS console facing straight ahead.

THE DOCTOR (CONT'D)


There's one tiny little gap in the universe left, just about to close. And it takes a lot of power to send this projection, I'm in orbit around a super nova.

(laughs softly)

I'm burning up a sun just to say goodbye.

Sure enough, the TARDIS is spinning around a beautiful super nova.

EXT. BAD WOLF BAY


ROSE


(shaking her head)

You look like a ghost.

THE DOCTOR

Hold on...

He takes his sonic screwdriver out of his pocket.

INT. TARDIS

He points the sonic screwdriver at the console and somehow this strengthens his projection.

EXT. BAD WOLF BAY


The Doctor now looks as solid as if he were really there. Rose walks over to him and raises a hand to touch his face.

ROSE

Can I t--?

THE DOCTOR


(regretfully)

I'm still just an image. No touch.

ROSE

(voice trembling)

Can't you come through properly?

THE DOCTOR

The whole thing would fracture. Two universes would collapse.

The scene partially violates the prohibition on information transit between the two universes, but the subterfuge used in the story (the two characters cannot touch, only see each other) somehow preserves the assumption presented by the MWI, conceding only a small but necessary poetic licence.

In the same episode, there is a very touching scene, where the two characters (Rose and the Doctor), right after their isolation in two different universes, are in front of one another separated only by a simple white wall.

This border, imaginary and symbolic, which divides not only two places in the same space-time continuum, rather two whole universes, gave us an idea.

We wanted to offer that experience as an installation. We wanted to allow the audience to listen to whatever is “on the other side”, beyond that wall, and we wanted all this to happen in real time.

That’s how Mirror_Mirror was born.

The installation is organized inside a space.

What does this mean?

The space in itself, when it is not filled with matter (empty), is in reality permeated by continually fluctuating low energy levels.

These fluctuations of the void have a particular significance at the quantum level (we refer to the Planck scale, i.e. infinitesimally small dimensions).

In quantum mechanics, these fluctuations represent temporary shifts in the energy state of the void space, according to the Heisenberg Uncertainty Principle.

This means that the principle of conservation of energy can be violated for very brief periods of time (the lower the energy, the longer the fluctuation can persist).

This energy can decay and take the shape of pairs of particles and antiparticles (which then annihilate each other).

In essence, the amount of average energy on a larger scale remains constant. Nothing is created, nothing is destroyed.

“There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law—it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in the manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.” - R.Feynman

It was in our interest to draw an analogy in the sound domain, allowing the audience to directly experience this phenomenon.

As a consequence, we created an application able to “sonify” ideal energetic fluctuations (generating pressure waves, structures and emerging behaviours), starting from the lowest energy level available, which is the background noise.

Thanks to the Kyma software implementation, and the use of microphones, it was possible to “measure” the background noise and, through a series of negative feedback operations, enable the instrument to create “something”, using all the information available in our space/universe to create temporary energetic fluctuations (statistical variations in the density of “packets” of sonic quanta), without violating the principle of conservation of energy - in fact, the average energy quantity, altogether, remains the same.

We have thus arrived at the gravitational center of the work (here we have no more fluctuations, only numerous certainties, due to the dimensions and mass of the object), which is represented by a wooden artifact, symbolically revisiting the white wall we came across in Doctor Who, which divides our universe from another possible universe.

Which one it is, the visitor will have to find out for himself.

By placing a stethoscope on the wooden surface, the observer will be able to “measure”, with various levels of definition, the sounds coming from another probable universe out of phase with ours.

In fact, however identical it may seem, it strangely does not share the same temporal parameter.

From quantum mechanics and its many-worlds interpretation, we know that every measurement operation will produce a further division of the universe.

So, it is possible that in the end there will be as many universes as there are observers present.

And, paying a little attention, it will be possible once again to listen to the voices of Rose and the Doctor, as if suspended in a temporal loop, to remind us that maybe the current laws of physics are not the same in every place and at every time.

Federico Placidi, Matteo Milani
>>U.S.O.Project

Sunday, September 09, 2012

New Album 'Unsorted Tales' Just Released!


Composed by U.S.O. Project - Unidentified Sound Object
SYN-007 © 2012 Synesthesia Recordings
Mastered @ Green Movie, Milan (Italy) by Matteo Milani & Federico Placidi

Performers:
Edoardo Carlo Natoli - Violin
Federico Placidi - Violoncello & Flute

TRACKLIST:

1) F'Shima (13.49)
2) Gretel's New Clothes (19.46)
3) Psalm 21 (22.15)

RELEASE INFO:

Artist: U.S.O. Project - Unidentified Sound Object
Title: Unsorted Tales Cat.No: SYN-007
File under: Experimental/Electronic
Format: Digital
Release date: 9.2012


F'Shima 



It's night.
There are no precise coordinates, nor recognizable signs that can lead us back to a familiar place.
And yet those sounds remind us, almost pathetically, of a summer night scene that we have listened to countless times.
But that’s not what this is.
The slow and articulated flow of pulsating energy gradually takes a structure, enabling us to be part of something very different.
The flow surrounds us, passes through us.
We can’t see it, nor touch it, but every particle interferes with our body, with our biological substrate. Far away, a storm is coming, perhaps.
It’s neither natural, nor inoffensive. It deceives us with the variety of its harmony of timbres.
The structure suddenly changes, reorganizing itself while turning us into something new.
A breach made of light. It’s oscillating.
Then, silence.
The air stagnates, suspended and motionless.
An electrical impulse.
One more. Others follow, one after another, fainter and more distant.
(Sounds), like petals made of ash.

F'Shima was designed and built around a series of recordings of electromagnetic fields generated by various electronic devices (hard drives, iPad, iPhone, portable game consoles...), recordings made by using old analogue telephone pickups. The source material, sometimes fully recognizable, sometimes radically altered, has been digitally processed using the Kyma sound design environment.


Gretel's New Clothes 



"Nibble, nibble, gnaw,
Who is nibbling at my little house?"
The children answered: "The wind, the wind,
The heaven-born wind."

Gretel's New Clothes is, in a way, an acousmatic reinvention of the famous fairytale by the Brothers Grimm. It is in the form of a Rondò. 
There is no attempt to literally retrace the story, rather to suggest it in a subtle way, through the use of strongly characterized sound materials (footsteps in the woods, the wind, the "sounds" of the night...) which are then dialogically processed during the final Violin and Cello improvisation. 

“They walked the whole night and all the next day too from morning till evening, but they did not get out of the forest, and were very hungry, for they had nothing to eat... And as they were so weary that their legs would carry them no longer, they lay down beneath a tree and fell asleep.” 


Psalm 21



“...Your hand will find out all your enemies;
your right hand will find out those who hate you.
You will make them as a blazing oven
when you appear.
The Lord will swallow them up in his wrath,
and fire will consume them.”

Psalm 21 is not a religious work. There is no implication of a celebratory or ritual nature. The dominant and recurring element during the composition of the piece is the relationship between creation (as a constantly evolving act), creator (whose existence and features are unknown to us) and creature (as a self-conscious subject that reflects on its own nature). These three elements are the driving force around which the work unfolds.

[Available on CD Baby Music Store]
[Digital booklet: Unsorted_Tales.pdf]

Wednesday, July 25, 2012

Lis10er is now available in the App Store

We would like to introduce you to our brand new iPhone app called Lis10er (pronounced Listener). It is a binaural audio augmented reality iPhone application that creates an “augmented soundspace”, warping and mixing in real time the device's live microphone input with prerecorded imagined sounds and elaborated versions of the real-time input. It was designed by U.S.O. Project (Matteo Milani, Federico Placidi) and Tue Haste Andersen.

The app is a mobile installation that places the emphasis on the surfaces of the world in which we live. It contains 10 themes: each one is a carefully composed “virtual place”, where the sonic environment is encapsulated and transformed to invent a new reality, mixing "Live" sounds and their relative aural context with designed ones. 


How It Works

Step 1: To get started you only need earphones or any other external headset with a microphone.

Step 2: With the wet/dry slider you can blend the amount of the input source with the processed output.

Step 3: Swipe left and right to switch among 10 themes.
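The wet/dry control in Step 2 is, presumably, a standard linear crossfade between the untouched input and the processed signal; a sketch of the general technique, not the app's actual code:

```python
def wet_dry_mix(dry, wet, mix):
    """Linear wet/dry blend: mix = 0.0 returns the untouched input,
    mix = 1.0 returns the fully processed signal."""
    if not 0.0 <= mix <= 1.0:
        raise ValueError("mix must be between 0.0 and 1.0")
    return [(1.0 - mix) * d + mix * w for d, w in zip(dry, wet)]

dry = [0.5, -0.5, 0.25]   # input samples
wet = [0.1, 0.3, -0.2]    # processed samples
print(wet_dry_mix(dry, wet, 0.0))  # identical to dry
```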



Compatible with iPhone 3GS, iPhone 4, iPhone 4S. Requires iOS 5.0 or later.


What’s in the next versions:
  • New Themes
  • Non linear selection and crossfading between Themes
  • Background recording
  • iTunes file transfer
  • Soundcloud integration
  • Support for iPhone’s built-in microphone

Thursday, April 19, 2012

Unseen Noises [USO002]

24-bit/48kHz Royalty-free Sound Design Collection



Unseen Noises is the second sound effects bundle created by sound designers and electronic composers Matteo Milani and Federico Placidi (aka Unidentified Sound Object - U.S.O. Project).

Electromagnetic information is invisible and omnipresent. In every city, especially the big ones, countless electromagnetic waves are hidden: we can't hear them, but they're everywhere! We explored this invisible noise pollution by transducing electromagnetic fields into audio signals with a telephone pickup: it acts like a radio antenna for hum and weird electromagnetic noises.

We plugged it into a SONOSAX SX-R4 recorder and moved it close to electrical devices - like a stethoscope - to locate interesting and curious sounds: LCD televisions, internet antennas, lighting systems, transformers, game consoles, tablets, electronic security systems, scanners, computer monitors and hard drives, printers, navigation systems, fax machines...

All of the audio files have been embedded with metadata for detailed and accurate searches in your asset management software.

Like the previous library, this collection has not been peak normalized but loudness normalized. Through loudness normalization, the gain of a signal is modified so that the signal's loudness level equals -23 LUFS. Loudness normalization helps solve the problem of balancing the loudness level of multiple sound files.
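In practice, loudness normalization boils down to measuring a file's integrated loudness (with a BS.1770-style meter) and applying a single gain offset so it lands at the target. A rough sketch in Python (the function name is ours, and the loudness measurement itself is assumed to have been done elsewhere):

```python
def loudness_normalize_gain(measured_lufs, target_lufs=-23.0):
    """Return the linear gain factor that moves a signal from its
    measured integrated loudness to the target loudness.

    The gain offset in dB is simply target - measured; converting
    dB to a linear amplitude factor uses 10 ** (dB / 20).
    """
    gain_db = target_lufs - measured_lufs
    return 10 ** (gain_db / 20.0)

# A file measured at -17 LUFS needs -6 dB of gain (factor ~0.501)
# to reach the -23 LUFS target.
```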

Here is what you get in "Unseen Noises":
  • Stereo Files (40 items)
  • Tab-delimited file (.txt)
  • Excel spreadsheet (.xls)
  • License Agreement (.pdf)
  • Artwork (.jpg)

Audio Format: Broadcast Wave Files (.wav)
Sample Rate: 48 kHz
Bit Depth: 24-bit
Size: 2.43 GB
Download size is 1.9 GB (compressed .rar archive)

Available on www.unidentifiedsoundobject.com

Friday, March 09, 2012

GRM pt.2: the birth of a concept

Daniel Teruggi wrote an interesting article about the Syter system at INA - GRM in the booklet for Archives GRM (CD 4). This whole CD is comprised of works created through Syter.

"To mark and celebrate the thirty years of the INA (Institut National de l'Audiovisuel), the GRM (Groupe de Recherches Musicales) has chosen to bring together an exceptional set of five compact discs, illustrating some of its most remarkable musical archives. These original works, which are often previously unpublished or have been dispersed throughout a host of other publications, are important because of the originality and audacity they testify to in the second half of the 20th century. Some listeners will be pleased to see that there are a number of illustrious composers here who, in the 1950s, frequented the studio of Pierre Schaeffer, and others will discover numerous musicians whose enthusiasm enabled this innovative musical genre to last throughout the following decades."
Emmanuel Hoog, président-directeur général de l'Ina

Daniel Teruggi - The time of real time

From the very beginning, music, whether vocal or instrumental, improvised or written, and up until the invention of recording processes, was listened to at the precise moment it was produced. The twentieth century changed all that: first with the appearance of recording media, which made it possible to listen to sound in a place and at a time other than those at which it was originally produced; then with the widespread use of electricity, which made it possible to invent new instruments and new ways of imagining and making music. Concrete music, electronic music, electroacoustic music, acousmatic music or contemporary electronic musics are all testimony to the same ambition: using electrical, electronic and computer-based technologies to invent the sounds of music. The invention of sounds is the invention of new forms of music, of new ways of looking at music, and is the logical consequence of the new opportunities that technology continues to provide us with. Musicians began to use computer systems a long time ago (1958) in order to synthesise sounds and to develop computer programmes that would enable them to combine sounds into musical works. Progressively it became possible to record these sounds, to process them or to hybridise them with synthetic sounds.
Musical computer technology did not develop quickly and was dependent on the way processors and data storage systems evolved; in 1958, a large computer in a research centre was necessary in order to produce a simple synthesised melody, which it was not even possible to record in memory. These initial technical difficulties brought about the appearance of two concepts which could be described in a historical perspective, but which are often presented as if they were antagonistic: deferred time and real time. Deferred time described the way that the first computer systems were unable to produce an instantaneous result.
Between the moment at which the intention was expressed and the moment when its result became an audible phenomenon, there was always a certain lapse of time.
The user programmed a sound using software, defining its various parameters and timbre, and then the computer calculated the sound and, depending on the complexity of the calculation, produced the result after a given interval. The listening time was deferred with respect to intention time.
It was logical that the next technological objective was real time, a concept that describes the possibility of hearing a sound at precisely the same time as the intention to make it is expressed.
Moving over to real time required changes to the command tools. Deferred time was the result of a programming system whereby the user defined, using written language, the result he wished to obtain; moving over to real time made it possible to define the intentions instantaneously and to modify the result as it was being listened to.
Now, most sound production and generation systems work in real time, enabling the user, thanks to various interaction tools (keyboards, mice, screens) to control and modify the sounds created and heard. Nevertheless, in the field of musical creation, and for a relatively long time, this technological evolution was opposed on methodological grounds. Real time obliges the operator to act and react, depending on the result, in a way that is similar to that of the instrumentalist. For many composers, deferred time, because it separated the moment of conception from the moment of listening, created a distance that was necessary for reflection, a situation that is similar to instrumental composition, between the writing of a piece on paper, and its being played.

[Daniel Teruggi @ Sonic Acts 2010 - courtesy Rosa Menkman]

  
Deferred time and real time in the GRM

At the beginning of the 1970s, the Groupe de Recherches Musicales began to experiment using computer technologies. At the time, the Group already had 20 years of experience, a major repertoire of musical works, a tradition for profound reflection on music and perception as well as innovative technological research. Little by little, therefore, work was undertaken to look at the possibilities that this new domain, which was already strong in the United States, could offer in France, where it was comparatively little known. Two projects were to follow one another, and then coexist, between 1975 and 1993: the first, from 1975 to 1987, concerned the development of deferred time sound processing tools, the "Studio 123 software programmes", developments that are dealt with in CD 3 of the GRM Archives set. The second project, the Syter system was a major technological development for musical computer programming, so original that its impact can still be felt in the development of processing tools today.
These two projects were vitally important in opening electroacoustic music up to composers from the instrumental world. The main successes of these two projects were to bring electroacoustic music out of the studio, making computer technology accessible, without needing programming skills, and making processing reliable and reproducible. The range of things it was possible to do to sound was considerably widened, using original and unheard of sound processing techniques. These two projects were a unique period for the GRM, the studios opened up to welcome composers with other ideas, concepts and points of view, the dialogue was rich and fruitful, and the understanding and analysis of the music being written there were enhanced.

The Syter project 

With the advent of computer technology, the first idea was to imagine parametric control of machines using digital tools. For example, synthesisers, while remaining analogue in the way the sound is generated, could be controlled by digital systems that would provide greater precision in terms of frequency than traditional rotary buttons. It was thus that the first Syter was born, an acronym for Synthèse en temps réel (real-time synthesis), the objective of which was to build a digital synthesis system based on a set of oscillators, controlled in real time by specialised gesture-based access or by external signals.
The first prototype that was built was relatively simple, since its only function was to control, in real time, the movements of a sound source between a number of loudspeakers. This prototype, with its delicate control system and laborious programming, was used in concert on 16 March 1977 for the creation of Cristal by François Bayle.
The designer of this tool and of its following versions was Jean-François Allouis, an engineer who arrived at the GRM in 1974, who was fascinated by the potential of computer technology as applied to sound and music, and who had an uncanny inventiveness when it came to finding solutions to new problems and designing original systems. For this first concert, the acronym Syter became Système temps réel (real-time system), and was the starting point for a whole 5-year period of development during which Jean-François Allouis contributed to the setting up of the first GRM computer, oversaw the implementation of the deferred time processing system, built the Syter real-time sound processor and the input and output converters, developed programming software for the processor, built one of the first interactive real-time parameter control systems and programmed the first processing tools. In conjunction with computer scientist Jean-Yves Bernier and computer technician Richard Bulski, he needed to build and rebuild the system several times before the first full system was complete, in 1984. The system underwent very few modifications and additions subsequent to that. Eight systems were built and sold, up until 1988. The software continued to evolve up until 1989, in particular thanks to the impetus of Hugues Vinet, who designed a digital mixing tool, providing the system with all the functions of a studio. Two systems were in operation at the GRM until 1995, and around 100 works were composed in part or in whole using the system.


Wednesday, November 23, 2011

Sonic Screens 2011

Electroacoustic music concert

an event by U.S.O. Project | Matteo Milani & Federico Placidi
in collaboration with O’ and DieSchachtel

Sound direction: Matteo Milani
Max/MSP programming: Federico Placidi
Visuals: Franz Rosati

O’ | via pastrengo 12 Milan | Italy

Saturday, November 26th - from 7 to 9 p.m.


Free admission

Sonic Screens aims to render the endless possibilities of life and its surroundings experienceable in our conscious activity, trying to deal with the possible infinites of the listening experience, both in their objective and manufactured dimensions. This journey related to the pure immersive listening will take advantage of Ambisonics sound diffusion practice, creating an immersive sound flow between different electroacoustic works by these selected international artists:

1. Benjamin Taylor - Keen Awareness
2. Daniel Blinkhorn - anthozoa
3. Daniel Courville - Variations on Vater unser im Himmelreich
4. Diana Salazar - Spindlesong
5. Jon Christopher Nelson - Just After The Rain
6. Josh Goldman - Hexagonal (Facets 1-6)
7. Kotoka Suzuki - Automata
8. Panayiotis Kokoras - Anechoic Pulse
9. Tae Hong Park - 48 13 N, 16 20 O

At the end of the evening, a/v live-set:

Franz Rosati - Theory of Vortex Sound [OUTFLUX]
PATHLINE #1

Matteo Milani & Federico Placidi

Milan/Rome-based Matteo Milani and Federico Placidi (aka U.S.O. Project - Unidentified Sound Object) are sound artists whose work spans from digital music to electroacoustic improvisation. Unidentified Sound Object was born from the desire to discover new paths and non-linear narrative strategies in both aural and visual domains. The project includes several collaborations with visual artists and performers. Milani and Placidi are the co-founders of the label "Synesthesia Recordings", a repository of electroacoustic works. U.S.O. Project is a continually evolving organism.

www.synesthesiarecordings.com


Franz Rosati

Franz Rosati is a sound and media artist who focuses his research on real-time A/V, Visual Music projects and installations, following an aesthetic based on the discontinuity of aural and visual patterns, avoiding any kind of repetition through the use of chaos mathematics and generative and stochastic processes. He uses his own custom-made software for real-time micro-montage and sound elaboration on the microscopic time scale, realizing compositions and performances based on the constant metamorphosis of aural and visual matter. Over the years, Franz Rosati has played in a large number of electroacoustic projects such as the Franco Ferguson improvisers' collective, Meccanica Ferma, Solderwire and GRIDSHAPE, developing his own approach to electroacoustic improvisation. In 2007 he founded the Nephogram [contemporary documents] collective with Stefano Pala a.k.a. UKQWJB. He also teaches MaxMSP/Jitter for sound design, interactivity and multimedia, focusing on computer vision techniques, in several workshops and art/design institutes, and developed Interactive Examples for Electronic Music and Sound Design, a book about sound theory and practice in MaxMSP.

www.franzrosati.com

Tuesday, May 31, 2011

Sonic Screens - call for multichannel works

Sonic Screens – environmental music listening sessions

Live mixing and spatialization by U.S.O. Project (Matteo Milani & Federico Placidi)


Call for multichannel works

U.S.O. Project is pleased to invite submissions of fixed media sound works for the second edition of “Sonic Screens”, a journey among different electroacoustic Soundscape compositions.

Sonic Screens is an annual event that will take place during two acousmatic evenings in Milan during Fall 2011.

Sonic Screens aims to render the endless possibilities of life and its surroundings experienceable in our conscious activity, trying to deal with the possible infinites of the listening experience, both in their objective and manufactured dimensions.

“Listening to the environment, contextualizing it objectively and creatively has always been a priority of the work of U.S.O. Project.
Free from any pseudo-environmental or socio-political implication, the continuous work of sampling, processing and transfiguring found sound, carefully preserved in the memory of a digital recorder, has always played a central role in our compositional practices.
U.S.O. defines Soundscape as the expressive and narrative richness that comes from the reciprocal and continuous interaction of multiple sound sources from the real world, and other phenomena which are perceptible and measurable only through proper and adequate transduction (electromagnetic signals, for example).
A Soundscape is also an opportunity for reflection and imagination that has little to share with the real world.
A Soundscape can be a place of the mind, a reminiscence of a future experienced in dreams, lands far away in space and time.” – Matteo Milani & Federico Placidi


Composers and sound artists are invited to submit multichannel works, up to 8 channels. The assignment of channels to speakers must be clearly indicated in the submission. Works of any duration will be considered although pieces of under 16 minutes will be given preference.

The performance will take advantage of Ambisonics sound diffusion practice, creating an immersive and uninterrupted sound flow between different works from selected international artists.

The material will be transcoded in real time to 2nd Order B-Format (via ICST Ambisonics Externals for MaxMSP).
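For readers curious about what such a transcoding involves: panning a mono source into horizontal-only 2nd-order B-format amounts to weighting it by a small set of sine/cosine terms of the azimuth. A simplified Python sketch, assuming Furse-Malham-style channel ordering (W, X, Y, U, V) and W weighting; the ICST externals handle full 3D encoding and normalization details:

```python
import math

def encode_2nd_order_horizontal(sample, azimuth_rad):
    """Pan one mono sample to horizontal-only 2nd-order B-format.

    Returns [W, X, Y, U, V] (FuMa-style ordering assumed):
      W carries the omnidirectional component (with the classic
      1/sqrt(2) weighting), X/Y are the 1st-order terms and U/V
      the 2nd-order terms in cos/sin of twice the azimuth.
    """
    w = sample * (1.0 / math.sqrt(2.0))
    x = sample * math.cos(azimuth_rad)
    y = sample * math.sin(azimuth_rad)
    u = sample * math.cos(2.0 * azimuth_rad)
    v = sample * math.sin(2.0 * azimuth_rad)
    return [w, x, y, u, v]

# A source straight ahead (azimuth 0) lands entirely in the
# cosine channels: X = U = sample, Y = V = 0.
```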

The recordings of the concerts will be available for streaming and released in binaural format for headphone use. Ownership of the tracks remains with the authors.


Submissions need to include:
  • a stereo version of the piece
  • individual mono files for each channel
  • channel configuration
  • sample rate
  • program notes
  • brief biography

While the composers of the selected works are encouraged to attend the event, attendance is not required for a work to be presented.
There is no registration fee.
The deadline for submission of works is October 31st, 2011.


Material Submissions

Please send download links to your work using one of the many file delivery services (yousendit.com, sendspace.com, gigasize.com, wetransfer.com, etc) in .zip or .rar format. Please do not email file attachments.

Electronic submissions should be sent to:

[submissions at synesthesiarecordings dot com]

For more information, email contact:

[info at usoproject dot com]


Terms and Conditions

Each participant may submit up to two works.

[Date and Venue To Be Announced]



“The essential difference between an electroacoustic composition that uses pre-recorded environmental sound as its source material, and a work that can be called a soundscape composition, is that in the former, the sound loses all or most of its environmental context. In fact, even its original identity is frequently lost through the extensive manipulation it has undergone, and the listener may not recognise the source unless so informed by the composer. In the soundscape composition, on the other hand, it is precisely the environmental context that is preserved, enhanced and exploited by the composer.” – Barry Truax


Now online on SoundCloud:

Sonic Screens - pt.1 (binaural) by usoproject

Sonic Screens - pt.2 (binaural) by usoproject

Recorded 29th October, 2010 @ O' - non-profit organization, during Live!iXem 2010 - Edition VII
International festival of music, mixed media and experimental electronic art - Milan

Tuesday, May 17, 2011

GRM Tools - pt.1: an interview with Emmanuel Favreau

by Matteo Milani - U.S.O. Project, May 2011 

GRM Tools is the result of more than 50 years of cutting-edge research and experimentation at the Groupe de Recherches Musicales de l'Institut National de l'Audiovisuel in Paris.
These plug-ins were realized by a succession of hardware and software engineers, who formulated the algorithms for the original GRM Tools in the 1990s. Over the years the GRM has focused on developing a range of innovative tools to process and represent sound.
The new GRM Tools Evolution is the latest powerful and imaginative bundle of new algorithms for sound processing. Three new instruments are available: Evolution, Fusion and Grinder. All three work in the frequency domain and provide powerful ways to manipulate audio in real time. I had the privilege of interviewing Emmanuel Favreau, software developer at INA - GRM. Here we go!


Matteo Milani: How many people are part of the GRM development team at INA?

Emmanuel Favreau: We are two people working full-time. Adrien Lefevre handles the Acousmographe; I'm on GRM Tools. We also regularly host students.


MM: Can you tell us a brief history of the GRM Tools from the origin until now?

EF: The first version of the GRM Tools was created by Hugues Vinet, who is now scientific director of IRCAM in Paris. This stand-alone version offered a couple of algorithms, using the Digidesign SoundAccelerator/Audiomedia III card. The user interface was made with HyperCard. When I arrived at the GRM in 1994, we decided to convert the processing available in the stand-alone version of GRM Tools into TDM plug-ins for Digidesign Pro Tools III. Treatments were rearranged, some modified, others abandoned. The original GRM Tools Classic bundle dates from this era. Later, the evolution of the treatments closely followed technological evolution: when processors became powerful enough for real-time processing, Steinberg introduced the VST architecture and Digidesign the RTAS format for Pro Tools. And finally, we developed the ST version - Spectral Transform - when computer processing power allowed us to calculate several simultaneous FFTs in real time.

 
[...] Jean-François Allouis and Denis Valette pioneered the hardware development of SYTER (SYstème TEmps Réel / Real-time System) with a series of prototypes produced during the late 1970s, leading in due course to the construction of a complete preproduction version in 1984. Commercial manufacture of this digital synthesizer commenced in 1985, and by the end of the decade a number of these systems had been sold to academic institutions.
Benedict Mailliard developed the original software for SYTER. By the end of the decade, however, it was becoming clear that the processing power of personal computers was escalating at such a rate that many of the SYTER functions could now be run in real time using a purely software-driven environment. As a result, a selection of these were modified by Hugues Vinet to create a suite of stand-alone signal processing programs. Finally, in 1993, the commercial version of this software, GRM Tools, was released for use with the Apple Macintosh.
The prototypes for SYTER accommodated both synthesis and signal processing facilities, and additive synthesis facilities were retained for the hardware production versions of the system. The aims and objectives of GRM, however, were geared very much toward the processing of naturally generated source material. As a consequence, particular attention was paid to the development of signal processing tools, not only in terms of conventional filtering and reverberation facilities but also more novel techniques such as pitch shifting and time stretching.

[via Electronic and Computer Music by Peter Manning]


MM: About GUI - 2DController. What is the origin of this pioneering, intuitive, but simple performer-instrument "link"?

EF: This type of interface was widely used in the days of SYTER, during the '80s. It allowed us to regain "analog" access to a digital instrument. Indeed, even manipulating a slider with a mouse requires some attention (clicking in the right place, moving vertically or horizontally without a mechanical guide, etc.). With the 2D interface, the entire surface of the screen becomes a controller: you obtain a result as soon as you click, and precision of movement only becomes necessary when you want to fine-tune it.
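Turning the whole screen into a controller is, at its core, a direct mapping from pointer coordinates to two parameter ranges. A minimal sketch (the function name and the example ranges are illustrative, not GRM's actual code):

```python
def screen_to_params(x_px, y_px, width, height, x_range, y_range):
    """Map a pointer position anywhere on the screen to two parameter values.

    x_range and y_range are (min, max) pairs for the two controlled
    parameters. The y axis is inverted so the top of the screen maps
    to the parameter maximum, as is conventional for on-screen controls.
    """
    nx = min(max(x_px / width, 0.0), 1.0)
    ny = min(max(1.0 - y_px / height, 0.0), 1.0)
    return (x_range[0] + nx * (x_range[1] - x_range[0]),
            y_range[0] + ny * (y_range[1] - y_range[0]))

# Clicking the top-left corner of an 800x600 screen with frequency on
# the x axis (20 Hz..20 kHz) and amplitude on the y axis (0..1) yields
# the minimum frequency and the maximum amplitude.
```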


MM: The mapping of parameters on multi-touch control surfaces free us from the use of a mouse and gives us an expressiveness never achieved before. What do you think of this new generation of controllers?

EF: Of course, these interfaces allow an overall and "analog" control which is not possible with the mouse (although the 2D knob mode or "elastic" mode are possible workarounds for the single-pointer limitation). As early as SYTER's design we proposed a system of "interpolator balls" to interpolate between different sets of parameters arranged in a two-dimensional space. Multi-point control of such a device is natural: we need both hands to shape and transform the space.
["Interpol" control screen of SYTER]
[via DAFX: Digital Audio Effects - Udo Zölzer, Xavier Amatriain]
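The interpolator-ball idea can be approximated with inverse-distance weighting: each ball holds a full set of parameter values at a 2D position, and the cursor blends them according to proximity. A hypothetical sketch (SYTER's actual interpolation law is not documented here, so this is one plausible formulation):

```python
def interpolate_presets(cursor, presets, power=2.0):
    """Blend parameter presets placed in a 2D space.

    presets: list of ((x, y), {param_name: value}) pairs.
    Weights fall off as 1 / distance**power; landing exactly on a
    ball returns that preset unblended.
    """
    weighted = []
    for (px, py), params in presets:
        d2 = (cursor[0] - px) ** 2 + (cursor[1] - py) ** 2
        if d2 == 0.0:
            return dict(params)  # cursor exactly on a ball
        weighted.append((d2 ** (-power / 2.0), params))
    total = sum(w for w, _ in weighted)
    out = {}
    for w, params in weighted:
        for name, value in params.items():
            out[name] = out.get(name, 0.0) + (w / total) * value
    return out

# Midway between a ball with gain 0.0 and a ball with gain 1.0,
# the interpolated gain is 0.5.
```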


MM: Is the SYTER still in use today in Paris?

EF: No, SYTER no longer works. It was composed of several elements (a PDP-11, large hard drives, a vector graphics terminal) which cannot be maintained today.


MM: Host-based tools vs. custom DSP engines: will there be a winner, or will they continue to peacefully coexist in the business?

EF: For the type of tool that we develop, the winner is clearly the host-based approach. For very large sessions with dozens of tracks and hundreds of plug-ins, DSP cards are still the best choice, but they could disappear with the spread of multi-core processors.


MM: How long did the Classic Bundle take to get ported from TDM to RTAS?

EF: It's hard to say, because it was not done directly. I first made the VST version, and then adapted it for RTAS. The algorithmic part posed no particular problems; the difficulties were rather on the side of the interface between the various plug-ins and hosts.


MM: How much research was needed to create the Spectral Transform bundle?

EF: The prototypes of the Spectral Transform were quite quick to achieve. The basic algorithm is the phase vocoder, which has been well known for a long time. What took time was the interface design, the choice of parameters and their mutual consistency, and overall stability and robustness (i.e. avoiding audio clicks and saturation in the values of some parameters).


MM: What's the technology behind the bundles?

EF: If we leave aside TDM - where the processing code is written in 56000 assembly language - all plug-ins are written in C++. The processing code is fully compatible between Mac and PC. In addition, the portability of the user interface is guaranteed by JUCE. All development is done on Mac; the PC adaptation is virtually automatic and requires minimal work.


MM: A description of version 3 and its new features: what goals have you achieved during this long period of software development?

EF: Having redesigned the interface and rewritten all the code allowed us to add some new features: window resizing, MIDI control with automatic learning, and agitation mode.
Agitation is a generalization of Randomize: random variations, with control over their amplitude and rate, can be applied to all parameters. All the GRM Tools are now also available as standalone applications. This makes it easy to process individual sounds, run quick tests and become familiar with the treatments without having to use a host DAW or sequencer.
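As Favreau describes it, agitation amounts to random modulation of a parameter, with the depth and rate of variation themselves exposed as controls. A rough control-rate sketch (the function name and the sample-and-hold behaviour are our assumptions, not GRM's implementation):

```python
import random

def agitate(base_value, amplitude, rate_hz, duration_s,
            sample_rate=50, seed=None):
    """Generate a control-rate trajectory for one agitated parameter.

    A new random offset in [-amplitude, +amplitude] is drawn rate_hz
    times per second and held between draws, so the parameter jitters
    around base_value at a controllable depth and speed.
    """
    rng = random.Random(seed)
    steps_per_hold = max(1, int(sample_rate / rate_hz))
    trajectory = []
    offset = 0.0
    for n in range(int(duration_s * sample_rate)):
        if n % steps_per_hold == 0:
            offset = rng.uniform(-amplitude, amplitude)
        trajectory.append(base_value + offset)
    return trajectory
```

With amplitude 0, the trajectory collapses to a constant parameter; raising the rate turns a slow drift into an audible flutter.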


MM: How do you manage feedback from musicians and sound designers to improve sound quality and the graphical interface?

EF: User feedback comes from various forums and from discussions with users and composers here at the GRM. In response to suggestions, plug-ins are changed, some features are added (but always in small numbers, to ensure compatibility), or a new treatment is created that may ultimately prove quite different from the original application. This is what happened with Evolution, which grew out of improving the freeze that can be achieved with FreqWarp.

[GRM Tools Evolution @ Qwartz 7 - courtesy Alexandra Lebon]


MM: What are the most effective methods of protecting applications against piracy?

EF: There is none. Whatever the methods, they will be bypassed one day or another. We must find a solution that is not too heavy for the users, while allowing a minimum of protection. We chose the system of Pace iLok because it is very common in musical applications. The recently announced changes should make it more flexible to use.


Thanks for your time Emmanuel, keep up the good work!


[...] Any transformation, no matter how powerful, will never equal or surpass synthesis, if it fails to maintain a causal relationship between the sound resulting from the transformation and the source sound. The practice of sound transformation is not to create a new sound of some type by a fortunate or haphazard modification of a source, but to generate families of correlated sounds, revealing persistent strings of properties, and to compare them with the altered or disappeared properties.
In synthesis, the formalisation of the devices and resulting memorisable abstraction, offer a stable set of references which can be easily transposed from one environment to another. In sound transformation, no abstraction of the available results is possible and neither is generalisation. The result of an experiment is always the product of an operation and a particular sound to which this operation is applied. The composer must be able to add to the sum of knowledge by reproducing a previously proven experiment.
What makes the wealth and functionality of a system is the assembly and convergence of the whole, its ability at any moment to answer the questions imagined. Specific tools built for a single experiment, no matter how prestigious, are sterile if they cannot be applied to other purposes. - Yann Geslin




References:

[Digital Audio Workstation by Colby Leider]
[sounDesign, a blog dedicated to the world of Sound and Audio Design]
[On GRM Tools 3, Part 1 - via designingsound.org]
[GRM Tools 3 review: a classic reborn]
[The GRM: landmarks on a historic route]
[GRM's current team]
[GRM Tools Store]

You can also read my interviews and reviews on Computer Music Studio (Italian only), a monthly magazine by Tecniche Nuove Editore. - Matteo Milani