Българският форум за музиканти

Recommended Posts

Posted

EAVnU9E.jpg


 


FTZ7ajF.jpg


 


2FzyyxQ.png


 


0D4PBWv.jpg


 


Unfortunately, the end of the article is missing, but the information up to this point is worth seeing.


 


 


Posted

Home theater kits from VISATON


 


Construction designs for DIY loudspeaker enclosures from the German company Visaton.


The "home theater" label is not binding. Many of the designs can easily be used for entirely different purposes, as long as their parameters are assessed correctly.


 


3pPXZ55.png


Posted (Edited)

Nomograms for Radio Amateurs (DIY)


V. Ya. Bruskin


Convenient tools for avoiding complex formulas and calculations.


They are especially suitable for people who have moved past outright copying without understanding, but have not yet reached the level of serious science.


As the title says: for amateurs.


There are plenty of sections covering low-frequency (audio) circuitry, power supplies, and other topics directly related to the subject matter of this forum.


 


rakGZ6L.jpg


Edited by Parni_Valjak
Posted
AES Press Release

The Audio Engineering Society Publishes Groundbreaking New Standard for 3D Audio

 

For Release: March 12, 2015

aes_logo_stacked_k_sm.png

 

 

The Audio Engineering Society is pleased to announce the recent publication of the AES69-2015 standard, which provides an important framework for the growing binaural and 3D personal audio industries. The standard, which describes the format and exchange of spatial acoustics files, is the product of the AES Standards Committee, the preeminent source of professional audio standards worldwide.
 
The AES69-2015 standard is seen as a boon to the evolving 3D audio field. Binaural listening is growing due to increased usage of smartphones, tablets and other individual entertainment systems that primarily present audio using headphones. An understanding of the way that the listener experiences binaural sound, expressed as head-related transfer functions (HRTF), opens the way to 3D personal audio. The lack of a standard for the exchange of HRTF data has made it difficult for developers to exchange binaural capture and rendering algorithms effectively. As 3D audio continues to gain popularity among end users, binaural listening could well become the first widely adopted 3D audio delivery method, provided HRTF data of sufficient fidelity is available.
 
The new AES69-2015 standard defines a file format to exchange space-related acoustic data in various forms. These include HRTF, as well as directional room impulse responses (DRIR). The format is designed to be scalable to match the available rendering process and is designed to be sufficiently flexible to include source materials from different databases.
 
This project was developed in AES Standards Working Group SC-02-08, with the writing group being led by Matthieu Parmentier and principal authors Piotr Majdak and Markus Noisternig. The standard builds upon an earlier project to define a spatially-oriented format for acoustics (SOFA), which aimed at storing HRTF data in a general way, capable of supporting any transfer-function data measured with microphone arrays and loudspeaker arrays.
 
The use of convolution-based reverberation processors in 3D virtual audio environments has also grown with the increase of available computing power. Convolution-based reverberators help guarantee an authentic and natural listening experience, but also depend on the acoustic quality of the applied directional room impulse response (DRIR). Many such issues have been of growing concern in the industry, and were discussed in depth at the recent AES 57th International Conference in Hollywood, CA, which addressed topics including immersive audio delivery standards, headphone design and performance, 3D audio in ambisonics, binaural audio, and more.
 
The following requirements are supported:
  • Description of a measurement setup with arbitrary geometry; that is, not limited to special cases like a regular grid, or a constant distance.
  • Self-describing data with a consistent definition; that is, all the required information about the measurement setup must be provided as metadata in the file.
  • Flexibility to describe data of multiple conditions (listeners, distances, etc.) in a single file.
  • Predefined descriptions for the most common measurement setups, which are referred to as “conventions.”
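
To make the ideas of self-describing metadata and predefined "conventions" concrete, here is a minimal, hypothetical Python sketch of reading an AES69/SOFA file. It assumes the file is a netCDF-4/HDF5 container following the common SimpleFreeFieldHRIR convention; the file name and the variable names used (Data.IR, Data.SamplingRate, SourcePosition) are illustrative of that convention rather than quoted from the standard text above.

# Minimal sketch: inspect an HRTF set stored in a SOFA (AES69) file.
# Requires the netCDF4 Python package; "subject_042.sofa" is a placeholder name.
from netCDF4 import Dataset

with Dataset("subject_042.sofa", "r") as sofa:
    # Global attributes carry the self-describing metadata, including the convention used.
    print("Convention:", getattr(sofa, "SOFAConventions", "unknown"))

    ir = sofa.variables["Data.IR"]            # impulse responses, dimensions (measurements, receivers, samples)
    fs = sofa.variables["Data.SamplingRate"]  # sampling rate(s)
    src = sofa.variables["SourcePosition"]    # measurement geometry (e.g. azimuth, elevation, distance)

    print("IR array shape:", ir.shape)
    print("Sampling rate:", float(fs[0]), "Hz")
    print("First source position:", src[0, :])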
 
AES Standards Committee Chair Bruce Olson states, “AES69 represents a fundamental piece of architecture for taking personal audio to a new level of performance. Using this, product developers will be able to take advantage of transfer-function databases from all over the world to produce a truly immersive 3D audio experience.”
 
AES Standards Committee
The AES Standards Committee is the organization responsible for the standards program of the Audio Engineering Society. It publishes a number of technical standards, informational documents and technical reports. Working groups and task groups with a fully international membership are engaged in writing standards covering fields that include topics of specific relevance to professional audio. Membership of any AES standards working group is open to all individuals who are materially and directly affected by the documents that may be issued under the scope of that working group.
 
Complete information, including scopes of working groups and project status, is available at

 

Posted (Edited)

IfBAb7Z.png


A look at the history and development of vacuum tubes from the vantage point of 1965. Since most, if not all, tubes were created before that year, the topic is still quite relevant.


The publication has four parts in total.


 


The story of the Valve


Edited by Parni_Valjak
Posted (Edited)

James Clerk Maxwell and his four equations of electromagnetic fields

Posted by David Herres

Clicking on Google Books and typing “James Clerk Maxwell” will quickly bring you to A Treatise on Electricity And Magnetism (1873). In this awesome volume, Maxwell synthesizes and thoroughly rationalizes the work of Faraday and other researchers. True, Maxwell’s outer limits were challenged by the Michelson-Morley experiment, which failed to detect the luminiferous aether that Maxwell’s field theory would seem to require. Albert Einstein’s later counter-intuitive interpretation and the far-out implications of quantum mechanics notwithstanding, Maxwell’s four partial differential equations still unify our understanding of light and electromagnetic radiation as phenomena that occupy a single spectrum.

5l73YJQ.jpg

The first equation comes from Ampère’s law and states that conduction current J and displacement current (the time variation of D) together induce a magnetic field.

The second equation comes from Faraday’s law and says that a time-varying magnetic field induces an electric field.

The third and fourth equations come from Gauss’s theorem, applied to the magnetic and electric fields respectively.
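
In modern vector notation (the differential form usually taught today, rather than Maxwell's original presentation), the four equations can be written as:

\begin{aligned}
\nabla \times \mathbf{H} &= \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t} &&\text{(Ampere-Maxwell law)} \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} &&\text{(Faraday's law)} \\
\nabla \cdot \mathbf{B} &= 0 &&\text{(Gauss's law for magnetism)} \\
\nabla \cdot \mathbf{D} &= \rho &&\text{(Gauss's law for electricity)}
\end{aligned}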

Maxwell admired and respected those who went before, going back as far as Thales of Miletus. His work primarily built upon experimental results obtained by Charles-Augustin Coulomb and Michael Faraday. Coulomb by chance had discovered that a magnetized needle is deflected when current passes through a nearby conductor. Faraday had built upon this observation, amassing a large amount of experimental data, which he gave to the world along with incisive interpretations that were meaningful to subsequent researchers including Maxwell. Faraday lacked the mathematical expertise as well as the inclination to conceptualize the phenomena that he described so well. Maxwell, with his unifying field theory and equations, can be seen as completing the work of his predecessors, establishing a high plateau of accurate theory, fully quantified.

This line of thought began at King’s College in 1862, where Maxwell found by calculation that electromagnetic force propagates at approximately the speed of light. Maxwell reasoned that the great rate at which both entities traverse vast distances through space could not be regarded as coincidence. The logical conclusion is that light and electromagnetism are actually the same except for frequency.

Maxwell was proficient in experimentation and theorization as well. This was evident in his work on the motion of gas molecules. His approach turned to statistics and probability, previously used more in the social sciences, to analyze these motions. The result was the Maxwell-Boltzmann theory of distribution of molecular energies.

In connection with Maxwell’s idea of the propagation of light and electromagnetic force, Albert Einstein, who kept a portrait of Maxwell on the wall of his study, had this to say:

Since Maxwell’s time, physical reality has been thought of as represented by continuous fields, and not capable of any mechanical interpretation. This change in the conception of reality is the most profound and the most fruitful that physics has experienced since the time of Newton.

The old imagery of electricity moving through wires like fluid moving through pipes was overthrown in favor of abstract mathematical models, and this new style of thinking made possible Einstein’s theory of relativity and the related, but not yet compatible, odd notions of quantum mechanics.

Edited by Parni_Valjak
Posted
Why Not Wye? When Combining Two Signals Into One Is Not A Good Idea
Anything that can be hooked up wrong, will be. You-know-who said that, and she was right...

 

March 24, 2015, by Dennis A. Bohn

7wEbjvE.jpg
NewRaneBugMarch2015.jpg
This article is provided by Rane Corporation.

 
Wye-connectors (or “Y”-connectors, if you prefer) should never have been created. Anything that can be hooked up wrong, will be. You-know-who said that, and she was right.

A wye-connector used to split a signal into two lines is being used properly; a wye-connector used to mix two signals into one is being abused and may even damage the equipment involved.

Here is the rule: Outputs are low impedance and must only be connected to high impedance inputs—never, never tie two outputs directly together—never.

If you do, then each output tries to drive the very low impedance of the other, forcing both outputs into current-limit and possible damage. As a minimum, severe signal loss results.

“Monoing” Low End
One of the most common examples of tying two outputs together is in “monoing” the low end of multiway active crossover systems. This combined signal is then used to drive a subwoofer system.

Since low frequencies below about 100 Hz have such long wavelengths (several feet), it is very difficult to tell where they are coming from (like some of your friends). They are just there—everywhere.

Due to this phenomenon, a single subwoofer system is a popular cost-effective way to add low frequency energy to small systems.

So the question arises: how best to do the monoing, or summing, of the two signals? It is done very easily by tying the two low frequency outputs of your crossovers together using the resistive networks described below. You do not do it with a wye-cord.

Summing Boxes
Figure 1 shows the required network for sources with unbalanced outputs. Two resistors tie each input together to the junction of a third resistor, which connects to signal common. This is routed to the single output jack.

vWzS07z.jpg
Figure 1. Unbalanced Summing Box

The resistor values can vary over a fairly wide range from those shown without changing things much. As designed, the input impedance is about 1k ohms and the line-driving output impedance is around 250 ohms.

The output impedance is small enough that long lines may still be driven, even though this is a passive box. The input impedance is really quite low and requires 600 ohm line-driving capability from the crossover, but this should not create problems for modern active crossover units.

The rings are tied to each other, as are the sleeves; however, the rings and sleeves are not tied together. Floating the output in this manner makes the box compatible with either balanced or unbalanced systems.

It also makes the box ambidextrous: It is now compatible with either unbalanced (mono, 1-wire) or balanced (stereo, 2-wire) 1/4-inch cables.

Using mono cables shorts the ring to the sleeve and the box acts as a normal unbalanced system; while using stereo cables takes full advantage of the floating benefits.

Stereo-to-Mono Summing Box
Figure 2 shows a network for combining a stereo input to a mono output. The input and output are either a 1/4-inch TRS, or a mini 1/8-inch TRS jack. The comments regarding values for Figure 1 apply equally here.

vhImqAb.jpg
Figure 2. Stereo-to-Mono Summing Box

Balanced Summing Boxes
Figures 3 and 4 show wiring and parts for creating a balanced summing box. The design is a natural extension of that appearing in Figure 1.

uY0yr6j.jpg
Figure 3. Balanced summing box using XLR connectors
NZ4c0Es.jpg
Figure 4. Balanced summing box using 1/4-inch TRS connectors

Here both the tip (pin 2, positive) and the ring (pin 3, negative) tie together through the resistive networks shown.

Use at least 1 percent matched resistors. Any mismatch between like-valued resistors degrades the common-mode rejection capability of the system.

Termites In The Woodpile
Life is wonderful and then you stub your toe. The corner of the dresser lurking in the night of this Note has to do with applications where you want to sum two outputs together and you want to continue to use each of these outputs separately.

In other words, if all you want to do is sum two outputs together and use only the summed results (the usual application), skip this section.

The problem arising from using all three outputs (the two original and the new summed output) is one of channel separation, or crosstalk. If the driving unit truly has zero output impedance, then channel separation is not degraded by using this summing box.

However, when dealing with real-world units you deal with finite output impedances (ranging from a low of 47 ohms to a high of 600 ohms).

Even a low output impedance of 47 ohms produces a startling channel separation spec of only 27 dB, i.e., the unwanted channel is only 27 dB below the desired signal. (Technical details: the unwanted channel, driving through the summing network, looks like 1011.3 ohms driving the 47 ohms output impedance of the desired channel, producing 27 dB of crosstalk.)

Now 27 dB isn’t as bad as first imagined. To put this into perspective, remember that even the best of the old phono cartridges had channel separation specs of about this same magnitude.

Therefore stereo separation is maintained at about the same level as a high-quality hi-fi home system of the 1970s.

For professional systems this may not be enough. If a trade-off is acceptable, things can be improved.

If you scale all the resistors up by a factor of 10, then channel separation improves from 27 dB to 46 dB.
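
As a quick sanity check on both figures, here is a small Python sketch (an illustration, not part of the original note). It models the crosstalk as a simple voltage divider: the unwanted channel drives roughly 1011.3 ohms of summing-network resistance into the desired channel's 47 ohm output impedance, and scaling every resistor by 10 multiplies that network resistance accordingly.

# Channel separation estimate for the summing box, modeled as a voltage divider:
# the unwanted channel drives ~1011.3 ohms of network resistance into the
# desired channel's 47 ohm output impedance.
import math

def separation_db(network_ohms, output_ohms=47.0):
    # Level of the unwanted signal at the desired output, relative to full level.
    ratio = output_ohms / (network_ohms + output_ohms)
    return -20 * math.log10(ratio)

print(f"{separation_db(1011.3):.1f} dB")       # ~27 dB separation (values as designed)
print(f"{separation_db(10 * 1011.3):.1f} dB")  # ~46.7 dB with all resistors scaled up by 10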

As always though, this improvement is not free. The price is paid in reduced line driving capability.

The box now has high output impedance, which prevents driving long lines. Driving a maximum of 3000 pF capacitance is the realistic limit. This amounts to only 60 feet of 50 pF/foot cable, a reasonable figure.

So if your system can stand a limitation of driving less than 60 feet, scaling the resistors is an option for increased channel separation.

Presented with permission from Rane Corporation.

 

 

 

Posted
In The Studio: Mid-Side Microphone Recording Basics
An incredibly useful method to attain ultimate control of the stereo field in your recordings...

 

September 25, 2014, by Daniel Keller

PlxZzEn.jpg

Courtesy of Universal Audio.

 

When most people think of stereo recording, the first thing that comes to mind is a matched pair of microphones, arranged in a coincident (XY) pattern. It makes sense, of course, since that’s the closest way to replicate a real pair of human ears.

But while XY microphone recording is the most obvious method, it’s not the only game in town. The Mid-Side (MS) microphone technique sounds a bit more complex, but it offers some dramatic advantages over standard coincident miking.

If you’ve never heard of MS recording, or you’ve been afraid to try it, you’re missing a powerful secret weapon in your recording arsenal.

More Than Meets The Ears
Traditional XY recording mimics our own ears. Like human hearing, XY miking relies on the time delay of a sound arriving at one input milliseconds sooner than the other to localize a sound within a stereo field.

It’s a fairly simple concept, and one that works well as long as both mics are closely matched and evenly spaced to obtain an accurate sonic image.

One of the weaknesses of the XY microphone technique is the fact that you’re stuck with whatever you’ve recorded. There’s little flexibility for changing the stereo image once it’s been committed to disk or tape. In some cases, collapsing the tracks into mono can result in some phase cancellation.

The MS technique gives you more control over the width of the stereo spread than other microphone recording techniques, and allows you to make adjustments at any time after the recording is finished.

Mid-Side microphone recording is hardly a new concept. It was devised by EMI engineer Alan Blumlein, an early pioneer of stereophonic and surround sound. Blumlein patented the technique in 1933 and used it on some of the earliest stereophonic recordings.

The MS microphone recording technique is used extensively in broadcast, largely because properly recorded MS tracks are always mono-compatible. MS is also a popular technique for studio and concert recording, and its convenience and flexibility make it a good choice for live recording as well.

What You Need
While XY recording requires a matched pair of microphones to create a consistent image, MS recording often uses two completely different mics, or uses similar microphones set to different pickup patterns.

Zo9wXHC.jpg

The “Mid” microphone is set up facing the center of the sound source. Typically, this mic would be a cardioid or hypercardioid pattern (although some variations of the technique use an omni or figure-8 pattern).

The “Side” mic requirement is more stringent, in that it must be a figure-8 pattern. This mic is aimed 90 degrees off-axis from the sound source. Both mic capsules should be placed as closely as possible, typically one above the other.

How It Works
It’s not uncommon for musicians to be intimidated by the complexity of MS recording, and I’ve watched more than one person’s eyes glaze over at an explanation of it.

But at its most basic, the MS technique is actually not all that complicated. The concept is that the Mid microphone acts as a center channel, while the Side microphone’s channel creates ambience and directionality by adding or subtracting information from either side.

The Side mic’s figure-8 pattern, aimed at 90 degrees from the source, picks up ambient and reverberant sound coming from the sides of the sound stage.

Since it’s a figure-8 pattern, the two sides are 180 degrees out of phase. In other words, a positive charge to one side of the mic’s diaphragm creates an equal negative charge to the other side. The front of the mic, which represents the plus (+) side, is usually pointed to the left of the sound stage, while the rear, or minus (-) side, is pointed to the right.

XjG3yDP.jpg
Mid-Side recording signal flow.

The signal from each microphone is then recorded to its own track. However, to hear a proper stereo image when listening to the recording, the tracks need to be matrixed and decoded.

Although you have recorded only two channels of audio (the Mid and Side), the next step is to split the Side signal into two separate channels. This can be done either in your DAW software or hardware mixer by bringing the Side signal up on two channels and reversing the phase of one of them. Pan one side hard left, the other hard right. The resulting two channels represent exactly what both sides of your figure-8 Side mic were hearing.

Now you’ve got three channels of recorded audio – the Mid center channel and two Side channels – which must be balanced to recreate a stereo image. (Here’s where it gets a little confusing, so hang on tight.)

MS decoding works by what’s called a “sum and difference matrix,” adding one of the Side signals—the plus (+) side—to the Mid signal for the sum, and then subtracting the other Side signal—the minus (-) side—from the Mid signal for the difference.

If you’re not completely confused by now, here’s the actual mathematical formula:

Mid + (+Side) = left channel
Mid + (-Side) = right channel

Now, if you listen to just the Mid channel, you get a mono signal. Bring up the two side channels and you’ll hear a stereo spread. Here’s the really cool part—the width of the stereo field can be varied by the amount of Side channel in the mix!
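
To make the matrix concrete, here is a minimal Python/NumPy sketch of the sum-and-difference decode described above (an illustration only; the width control is simply a gain applied to the Side signal before the matrix, i.e. the "amount of Side channel in the mix").

import numpy as np

def ms_decode(mid, side, width=1.0):
    # Sum-and-difference matrix: Mid + (+Side) -> left, Mid + (-Side) -> right.
    # width scales the Side contribution: 0.0 gives mono (Mid only),
    # 1.0 the nominal stereo image, larger values an exaggerated spread.
    left = mid + width * side
    right = mid - width * side
    return left, right

# Example with synthetic signals at a 48 kHz sample rate.
sr = 48000
t = np.arange(sr) / sr
mid = np.sin(2 * np.pi * 440 * t)          # stand-in for the Mid (cardioid) track
side = 0.3 * np.sin(2 * np.pi * 220 * t)   # stand-in for the Side (figure-8) track
left, right = ms_decode(mid, side, width=1.2)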

Why It Works
An instrument at dead center (0 degrees) creates a sound that enters the Mid microphone directly on-axis.

But that same sound hits the null spot of the Side figure-8 microphone. The resulting signal is sent equally to the left and right mixer buses and speakers, resulting in a centered image.

An instrument positioned 45 degrees to the left creates a sound that hits the Mid microphone and one side of the Side figure-8 microphone.

Because the front of the Side mic is facing left, the sound causes a positive polarity. That positive polarity combines with the positive polarity from the Mid mic in the left channel, resulting in an increased level on the left side of the sound field.

Meanwhile, on the right channel of the Side mic, that same signal causes an out-of-phase negative polarity. That negative polarity combines with the Mid mic in the right channel, resulting in a reduced level on the right side.

An instrument positioned 45 degrees to the right creates exactly the opposite effect, increasing the signal to the right side while decreasing it to the left.

What’s The Advantage?
One of the biggest advantages of MS recording is the flexibility it provides. Since the stereo imaging is directly dependent on the amount of signal coming to the side channels, raising or lowering the ratio of Mid to Side channels will create a wider or narrower stereo field.

The result is that you can change the sound of your stereo recording after it’s already been recorded, something that would be impossible using the traditional XY microphone recording arrangement.

Try some experimenting with this—listen to just the Mid channel, and you’ll hear a direct, monophonic signal. Now lower the level of the Mid channel while raising the two Side channels.

As the Side signals increase and the Mid decreases, you’ll notice the stereo image gets wider, while the center moves further away. (Removing the Mid channel completely results in a signal that’s mostly ambient room sound, with very little directionality – useful for effect, but not much else.) By starting with the direct Mid sound and mixing in the Side channels, you can create just the right stereo imaging for the track.

Another great benefit of MS miking is that it provides true mono compatibility. Since the two Side channels cancel each other out when you switch the mix to mono, only the center Mid channel remains, giving you a perfect monaural signal.
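Written out as simple arithmetic, with left = Mid + Side and right = Mid − Side, the mono fold-down is left + right = (Mid + Side) + (Mid − Side) = 2 × Mid: the Side terms cancel exactly, which is why the result is a clean monaural signal.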

wo8zo25.jpg
Mid-Side recording signal flow.

And since the Side channels also contain much of the room ambience, collapsing the mix to mono eliminates that sound, resulting in a more direct mix with increased clarity. Even though most XY recording is mono compatible, the potential for phase cancellation is greater than with MS recording. This is one reason the MS microphone technique has always been popular in the broadcast world.

Other Variations
While most MS recording is done with a cardioid mic for the Mid channel, varying the Mid mic can create some interesting effects. Try an omni mic pattern on the Mid channel for dramatically increased spaciousness and an extended low frequency response.

Experimenting with different combinations of mics can also make a difference. For the most part, both mics should be fairly similar in sound. This is particularly true when the sound source is large, like a piano or choir, because the channels are sharing panning information; otherwise the tone quality will vary across the stereo field.

For smaller sources with a narrower stereo field, like an acoustic guitar, matching the mics becomes less critical. With smaller sources, it’s easier to experiment with different, mismatched mics. For example, try a brighter sounding side mic to color the stereo image and make it more spacious.

As you can see, there’s a lot more to the MS microphone technique than meets the ear, so give it a try. Even if the technical theory behind it is a bit confusing, in practice you’ll find it to be an incredibly useful method to attain ultimate control of the stereo field in your recordings.

Daniel Keller is a musician, engineer and producer. Since 2002 he has been president and CEO of Get It In Writing, a public relations and marketing firm focused on audio and multimedia professionals and their toys. Despite being immersed in professional audio his entire adult life, he still refuses to grow up. This article is courtesy of Universal Audio.

 

Posted
The Father of the Digital Synthesizer
Mar 23, 2015
uiFA1wP.png
 

“I was aware that I was probably the first person to ever hear these sounds, and that what I was hearing was something musical that had probably never been heard by anyone before — at least, not by anyone on this planet.”

— John Chowning, Inventor of FM Synthesis

 

Long before Stanford University was considered a technology powerhouse, its most lucrative patent came from an under-spoken composer in its music department. Over the course of two decades, his discovery, "frequency modulation synthesis," made the school more than $25 million in licensing fees.

But more importantly, FM synthesis revolutionized the music industry, and opened up a world of digital sound possibilities. Yamaha used it to build the world’s first mass-marketed digital synthesizer — a device that defined the sound of 80s music. In later years, the technology found its way into the sound cards of nearly every video game console, cell phone, and personal computer.

Despite the patent’s immense success, its discoverer, Dr. John Chowning, a brilliant composer in his own right, was passed over for tenure by Stanford for being “too out there.” In Stanford’s then-traditional music program, his dabblings in computer music were not seen as a worthy use of time, and he was largely marginalized. Yet by following his desire to explore new frontiers of audio, Chowning eventually recontextualized the roles of music and sound, found his way back into the program, and became the department chair of his own internationally-renowned program.

This is the story of an auditory pioneer who was unwilling to compromise his curiosity — and who, with a small group of gifted colleagues, convinced the world that computers could play an important role in the creation of music.

Echoes in Caves

 

gSe5faX.png

John Chowning in his youth

John M. Chowning was born in the Autumn of 1934, just as New Jersey’s northern oak leaves were turning yellow. In the throes of the Great Depression, the Chownings did everything they could to provide a modest household for their two sons and daughter — though it was a space “devoid of music.”

Despite this, the young Chowning felt mystically drawn to sound. “I loved caves as a child,” he reminisces. “I would go hiking up in the Appalachian Mountains, go into these caves, and just listen to the echoes. They were mysterious, magical places.”

It was the world’s natural acoustics that inspired Chowning to take up the violin at the age of eight, though the instrument never quite captivated him. After he moved to a Wilmington, Delaware high school in the 1940s, “a great music teacher” exposed him to his true love: percussion.

Fully versed in reading sheet music, Chowning was initially enlisted to play the cymbals, but soon graduated to the drums. This became his “full-hearted passion,” and before long, he was a competent jazz percussionist. “My whole world,” he says, “was music, music, music.”

Upon graduating, Chowning served in the Korean War and, due to his skill, was enlisted as a drummer in one of the U.S. Navy’s jazz bands. “There were so many good musicians there who really pushed me to perform — musicians who were, at the start, much better than I,” he admits. “There was a lot of pressure for me to get my chops up.” For three years, he toured Europe with the group, catering to the culture’s growing interest in the newer movements of jazz.

L7Faea0.jpg

He returned to the U.S. with a free college education on the GI bill, and attended Wittenberg University, a small liberal arts school in Ohio, at the behest of his father. Here, he immersed himself in the “contemporary composers” — Béla Bartók, Igor Stravinsky, Pierre Boulez, and the like. “I got very comfortable playing by myself,” he adds. “Improvisation was my path to new music.”

A U.S. Navy Jazz band (c.mid-1940s)

It was a path that would eventually lead him back to Europe.

After graduating in 1959, Chowning wrote a letter to Nadia Boulanger, a famed French composer who taught many of the 20th century’s best musicians, and expressed his great interest in studying under her. Chowning was accepted, and he and his wife moved to France, where he joined 40 other gifted students. For the next two years, he met with her once a week for intensive studies. “It was a very fruitful time for me,” he recalls. “I learned a lot about harmony and counterpoint.”

But Chowning’s biggest revelation — the one that changed his life — came when he attended Le Domaine Musical, an “avant-garde” concert series in Paris that often featured experimental music:

“Some of the music involved loudspeakers. Hearing what some of these composers were doing was life-changing — Karlheinz Stockhausen had a four-channel electronic piece involving [pre-recorded] boys’ voices. The spatial aspects caught my attention: that one could create, with loudspeakers, the illusion of a space that was not the real space in which we were listening.”

While Chowning sat awestruck, the rest of the attendees — mostly “traditionally-minded” musicians and students — “booed, screamed and whistled in disapproval after the piece was over.” Though electronic music was just emerging in Europe at the time, it wasn’t yet popular or widely accepted, especially among trained musicians.

In his next meeting with Boulanger, Chowning sheepishly admitted his interest in “loudspeaker music,” expecting to be ridiculed. Instead, the well-versed teacher encouraged him to pursue it. But as Chowning soon learned, electronic music was “highly dependent on special studios with technical know-how.” Though he yearned to reproduce it and explore it, he did not have the financial or technical ability to do so.

“When I heard this music in Paris, I thought, ‘I could create music if I could control those loudspeakers,’” recalls Chowning. “But it was disappointing to learn they were dependent on such high tech.”

Stanford’s Misfit Composer

With no way to pursue his newfound interest in electronic music, Chowning returned to the United States, enrolled in the music doctoral program at Stanford University, and became involved as a percussionist in the school’s symphony. Gradually, he began to resign himself to a more traditional repertoire.

Then, in the winter of 1963, during Chowning’s second year of studies, a fellow percussionist handed him a page ripped out of Science magazine. Chowning hardly glanced at it before stuffing it in his pocket — but two weeks later, he rediscovered it and gave it a read. The article, “The Digital Computer as a Musical Instrument,” was written by a young scientist at Bell Laboratories named Max Mathews. At first, Chowning couldn’t really make sense of it.

“I had never even seen a computer before, and I didn’t understand much,” admits Chowning. “But there were a couple of statements that got my attention — especially that a computer was capable of making any conceivable sound.”

mNzqKCK.png

Schematics from Mathews’ paper; Stanford Special Collections Library

The article was accompanied by several diagrams that were, at first, beyond Chowning’s comprehension, but his curiosity compelled him and he soon pieced together an understanding:

“A computer would spit out numbers to a digital-to-analog converter, which would then convert numbers into voltage proportionally. The voltage went to a loudspeaker... It made me think back to all those big studios in Europe. I thought, ‘If I could learn to generate those numbers and get access to a computer and a loudspeaker, none of that expensive equipment would be required!’”

>>>> PART2

Posted
 

PART2

 

Though Chowning was a “mere” percussionist with no discernible electronics skill, he didn’t flounder. Instead, he decided to take advantage of Stanford’s “pretty good computers” by enrolling in a programming class. “I had to prove I could do it,” he says, “and it really wasn’t too difficult.” Using a bulky, but then-new Burroughs B-5500, he learned to code in ALGOL.


UjKmA0c.png



A Burroughs B-5500 system: an early computer enlisted by Chowning for music



By the Summer of 1964, he’d acquired some basic proficiency with the machine and language, and decided to journey to Murray Hill, New Jersey to meet Max Mathews, the man who’d written the paper that had inspired him. When Chowning arrived, Mathews was pleasantly surprised, and took the curious musician under his wing. Chowning recalls:



“I told him that I wanted to use his program, ‘Music 4,’ and he gave me a big box of punch cards. Each represented a particular waveform — sinusoidal, triangular wave, and whatnot. Then, another card would tell the computer how to connect these, and modulate frequencies. You could generate thousands and thousands of periods of a sine wave with just a couple of punch cards.”



At the time, Stanford was far from the risk-taking institution it is lauded as today — and its music program was especially traditional and rigid. As Chowning had experienced in France, most of his college colleagues scoffed at the unfamiliar, foreign concepts of computer music. 


“It was against what the department said music was; they said I was dehumanizing music!” laughs Chowning. “My response was, ‘Perhaps it’s the humanization of computers.’”


Despite the music department’s backlash, when Chowning returned to Stanford with Mathews’ box full of punch cards, he found a supportive mentor in composition professor Leland Smith:


“[Leland Smith] was the one person in the music department who said, ‘Go ahead and tell me what you learn,’” recalls Chowning. “It was a very traditional department at the time, with a lot of interest in historical musicology and performance practice; what I was doing was pretty far from the central interest of the department, but Leland encouraged it.”


Just as Chowning delved into the relatively unknown world of computed music, Smith took off to Europe for a one-year sabbatical. But before he left, he made Chowning promise him one thing: “Show me what you’ve learned when I return.” 


The Discovery of FM Synthesis


k3HZivA.png


Years before, when Chowning was studying in Paris, he had come across Bird in Space, a bronze sculpture by the Romanian artist Constantin Brâncuși. “I remember looking up at this beautiful, modernist work: simple, elegant, but complicated,” he reminisces. “All the lines extended one’s eyes into a space that was absolutely gripping in its effect.”


He was driven, in that moment, to “create spatial illusions” — to create sounds that would move like the lines in Bird in Space.


In the late months of 1964, while Leland Smith was abroad, Chowning had an idea: he’d create a “continuum” to move sounds in a 360-degree space. At the time, not much was known about why or how sounds moved in space; Chowning’s first challenge was to create sounds that “not only had angular distribution, but also radial distance.” In order to achieve this, he realized that he’d need to fully immerse himself in acoustics, as well as the science behind auditory perception. 


For the next two years, the rogue musician hit the books. “I had to learn much about audio engineering and perception, psychoacoustics, and the cognition of sound (the difference between something loud and close, and loud and far),” says Chowning. “I did it all so that I could accomplish my composition goals.”


***


By 1966, Chowning’s time as a Stanford Ph.D. student had come to a close. With strong references from his adviser, Leland Smith, and others in the department, he joined the staff as an assistant professor of composition.


Increasingly, Chowning spent much of his free time in the “dungeons” of Stanford’s artificial intelligence lab, where he could access computers and analyze the properties of the sounds he was working with. It was here, late one night in the Autumn of 1967, that Chowning had an unintentional breakthrough:



“I was experimenting with very rapid and deep vibrato [a music effect characterized by a rapid, pulsating change in pitch]. As I increased the vibrato in speed and depth, I realized I was no longer hearing instant pitch and time.”



To put this in digestible terms, any given sound — say a B-note on a violin — has a distinctive timbre (identifying tone quality), and produces a sound wave. Sounds can be manipulated with certain effects, which are either natural (playing an instrument in a giant room provides reverberation and/or echo), or imposed (by manipulating a note on a violin, one can produce vibrato, or a wavering sound). The sound cards in Chowning’s possession were able to reproduce these effects digitally.


iuN3ajB.png


Chowning found that by using two simple waveforms — one, the carrier, the frequency of which was modulated by the other — he could create a very rapid “vibrato” capable of producing complex, harmonic or inharmonic tones depending on the waveforms’ frequencies and the depth of the modulation. He called this “frequency modulation synthesis,” or FM synthesis. The sounds this method produced were entirely foreign:



“I was aware that I was probably the first person to ever hear these sounds, that what I was hearing was something musical that had probably never been heard by anyone before — at least, not by anyone on this planet.”



“It was 100% an ear discovery: I knew nothing about the math behind it,” adds Chowning. “It was a product of my musical training.”


Though he instantly understood the gravity of what he’d discovered, Chowning realized that, in order to fully grasp its applications, he’d need to understand the math that was making these sounds possible. This was a problem: at 30 years of age, his last math course had been freshman algebra — and even then, he’d had to “beg for a passing grade.”


But Chowning was driven forward by a deep-rooted desire to explore music, to reach into the mires of unexplored sound. And he had help: first from his “angel,” David W. Poole, an undergraduate math major who taught him how computers worked, and then from the scientists and engineers at Stanford’s Artificial Intelligence (AI) Lab. “There was excitement in the discovery,” he admits, “but the potential of using it in my music is what drove me — not discovery, or invention.”


So, the music professor spent innumerable hours delving into math books, and consulting these researchers in the AI Lab. “Gradually,” he says, “I learned to understand the FM equation through programming and math, and learned that it had an importance not seen by many people.”


g2lzvh9.png


The “importance” was that FM synthesis could be applied to produce highly accurate digital replications of real instruments. More boldly put, it could open up musicians to a new world of sound customization.
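
For readers who want to hear roughly what a two-operator FM voice of the kind described above sounds like, here is a minimal, hypothetical Python sketch (not Chowning’s or Yamaha’s implementation): a sine-wave carrier whose phase is modulated by a second sine wave, with the modulation index and a simple envelope shaping the timbre over time.

import numpy as np
import wave

sr = 44100                      # sample rate in Hz
t = np.arange(int(sr * 2.0)) / sr

fc = 440.0                      # carrier frequency (the perceived pitch)
fm = 220.0                      # modulator frequency; simple ratios give harmonic spectra
index = 5.0 * np.exp(-3.0 * t)  # modulation index decaying over time (brightness fades)

# Two-operator FM: the carrier's phase is modulated by a sine at fm.
signal = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
signal *= np.exp(-2.0 * t)      # crude amplitude envelope

# Write a 16-bit mono WAV file so the tone can be auditioned.
pcm = (signal / np.max(np.abs(signal)) * 32767).astype(np.int16)
with wave.open("fm_tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(sr)
    f.writeframes(pcm.tobytes())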


But Chowning couldn’t neglect the main duties of his job. Like all professors in the music department, he was required to compose regularly, with the expectation that his work would receive peer recognition. He also had teaching duties to maintain, which consumed much of his time. While juggling his responsibilities, he spent the next four years working on FM synthesis, and replicating the sounds of various instruments.


>>>> PART3


Posted
PART3

 

Dr. Chowning’s Stanford Experience

By 1970, Chowning had succeeded in using FM synthesis to mimic various tones — drum sounds, vocals, brass — in a somewhat rudimentary form. When he passed his findings along to Max Mathews, the Bell Labs scientist he’d been inspired by, Mathews “fully understood” the importance of the discovery; with the endorsement of John Pierce, then-director of research at Bell Labs, Mathews suggested that Chowning apply for a patent.


In those days, innovative Stanford professors had a choice: they could develop their inventions independently, and assume all of their own financial and legal risk, or they could sign them over to Stanford’s Office of Technology Licensing (OTL). The OTL, which Stanford had just created, was the safer bet for Chowning: it would absorb all of the risk, bear the monetary burden of trying to license the technology (some $30-40,000), and would allow him to be involved in the process. The downside was that the university would get the lion’s share of the income from the patent, and would only pass a small sum on to the inventor himself.


“I didn’t want to deal with lawyers — I wanted to do my music,” defends Chowning. “I didn’t care about the money as much as I cared for my compositions. It was natural for me to say, ‘Please take it.’”


For a transactional fee of $1, Chowning signed the patent for FM synthesis over to Stanford’s OTL, which then began the long process of courting instrument companies.


Meanwhile, Chowning bunkered down and focused on integrating his discovery into his own music. Sabelithe, a piece which he’d begun in 1966 but completed in 1971, was the first composition ever to feature FM synthesis. If you listen closely between the 4:50 and 5:10 mark, you’ll hear a drum-like tone gradually transmogrify into a trumpet sound:


https://youtu.be/5MBRF8Mqj0s


A year later, he presented Turenas, the first electronic composition to feature the “illusion of sounds” moving in a 360-degree space. “Normally when a composer puts up a work, there are a couple of string quartets, a symphony or two, chamber music,” says Chowning. “I had very little — just a computer and some speakers.”


Though Chowning remembers the day of the composition’s premiere as “the realization of [his] dreams for many years,” his contemporaries weren’t as smitten. Years before, as a young student in Paris, Chowning had witnessed a room full of traditional composers boo an electronic musician’s performance; now, he was the man being scrutinized, and though the attendees weren’t the rowdy type, they exhibited the same breed of non-acceptance toward this new music. “What I was hearing in Turenas was not what the composers who were asked to come evaluate my work were hearing,” he says. “What sense does sound moving in space make to people used to orchestral, traditional sounds? Universities are conservative — they move, but slowly — and it’s not popular to deviate from tradition.”


https://youtu.be/kSbTOB5ft5c


Despite his important output in the early 1970s (which also included an extensive academic paper mapping out FM synthesis and its exciting implications), Chowning soon found himself on the chopping block.


Seven years into his role as an assistant professor, it was time for him to take his sabbatical. “During this time, Stanford either promotes you or tells you to find work elsewhere,” says Chowning. “While I was on sabbatical, I was told I would not be teaching at Stanford anymore.” Though a reason for this decision was never provided, Chowning admits that the copious time he spent “developing a computer music system” was not perceived as a value add by the institution.


At first, this was devastating news for the young academic:

“It was a big problem when I was let go from Stanford. I had a young family and I had to figure out how I was going to support them. But I still felt like I had to pursue what I started. It wasn’t worth giving up. I was in a digital world, but the whole process was intensely musical. Programming and creating sounds, and figuring out how the two relate — all of that was, for me, the point.”

As if on cue, the famous French composer Pierre Boulez contacted Chowning and offered him an advisory position in Paris. Boulez was, at the time, “a major figure in music” — the conductor of both the New York Philharmonic and the BBC Symphony Orchestra — and had just been commissioned by France’s prime minister to create a national institution for experimental music research. It was his hope that the resulting project, IRCAM, would be of interest to Chowning.

Chowning jumped at the opportunity, and soon found himself back in Paris, assisting in the development of the program.


***


As Chowning revelled in his new role, Stanford struggled to license the FM synthesis patent. They approached all of the major organ companies, but none of them had the technical ability to grasp the discovery’s implications. Chowning’s discovery was ahead of the industry’s learning curve.


“Hammond, Wurlitzer, Lowrey — they’d come around, take a listen, and say, ‘That sounds good!’” recalls Chowning, who learned of these interactions later on. “They understood the abstract, but had no knowledge of the digital domain or programming. All said no.”


As a last resort, Stanford contacted Yamaha, a Japanese musical instrument manufacturer which, at the time, had already begun investigating the digital domain with an eye toward the distant future. And as luck would have it, the company gave the patent a chance.


“Yamaha sent out a young engineer to Palo Alto,” Chowning says, “and in ten minutes, he understood [our technology].” In fact, Yamaha understood it so well that they decided to sign a 12-month licensing agreement and figure out if it was something that could be beneficial to them commercially.


yamahaletter.png



An early negotiation letter from Yamaha to Stanford; Stanford Special Collections Library



By early 1975, it’s likely that Stanford began to realize it had made a mistake by letting Chowning go. Not only did his patent show promise of making the university “tens of thousands of dollars a month,” but he had been swiftly snatched up by one of the biggest names in music to advise the development of the world’s most advanced digital music center in Paris.


With its tail between its legs, Stanford approached Chowning and extended an offer to return, this time as a research associate. Chowning agreed.


Back in his old office, Chowning wasted no time in redefining the school’s music department. With a small team of colleagues — John Grey, James Moorer, Loren Rush and his old adviser Leland Smith (then busy developing the SCORE music publishing program) — he proceeded to found the Center for Computer Research in Music and Acoustics (CCRMA, pronounced “karma”). The goal of CCRMA was simple: “it was a coherent group of people working together toward the synthesis, processing, spatialization and representation of sound.” It was also, by all accounts, the first program of its kind in the United States.


But really, says Chowning, CCRMA was a manifestation and extension of the ideas and projects he’d started more than a decade before, as a graduate student. Except there was a difference: while his efforts had once been disregarded and underappreciated, they were suddenly of the utmost importance to Stanford and its new patent licensee.


How Yamaha and Chowning Built the Modern Synthesizer


YamahaDX7.jpg



A Yamaha DX7: the synth that changed music forever



Since licensing FM synthesis in 1974, Yamaha had been in constant communication with Chowning and Stanford’s Office of Technology Licensing. 


For years, analog synthesizer instruments had ruled the market. But they came with their fair share of shortcomings (or unique quirks, depending on who you talk to). From the 1950s to the early 1960s, electronic instruments were limited by their dependance on magnetic tape as a means of producing and recording sounds. Even newer developments in the mid-1960s, like the Moog or the Mellotron, were fickle: notes sounded different each time they were played, and there were often variations in pitch and amplitude. Most woefully, each individual note played was recorded “in isolation” — that is, few instruments had the ability to polyphonically produce sound.


Efforts to digitize the synthesizer had been made in previous decades, but were thwarted by the gargantuan size of computers and memory cards, and the fact that it took up to 30 minutes just to hammer out a few measures of music. But with semiconductor technology improving rapidly by the mid-1970s, it became feasible that Chowning’s FM synthesis technology could fit on a reasonably-sized computer chip.


Working with Chowning’s technology in 1974, Yamaha successfully produced its first prototype, a machine called MAD. Though it was just a proof, and nowhere near a product that would appear on the market, Chowning saw the team’s potential.


“It was clear right away that Yamaha’s engineers were masterful,” he says. “They put together these instruments quickly, and made great strides on my work. It was a very good relationship.”


>>>>PART4


Posted

PART4

Over three years of licensing FM synthesis, Yamaha, in tandem with Chowning, gradually improved the realistic qualities of their “timbres,” or sound effects. This level of commitment is evident in an excerpt from a July 1975 letter from Yamaha to Chowning:

devofsounds.png

Yamaha’s progress update on the synth tones; Stanford Special Collections Library

welcometojapan.png

Some friendly correspondence: Yamaha welcoming Chowning to Japan for the first time; Stanford Special Collections Library

While Yamaha tinkered with digital synthesizer technology, they made great strides with their analogue synths. In 1975 and 1976, the company released two machines — the GX1 and the CS80 — in limited runs of 10 units. Despite their $50,000 price tags, they were snatched up by the likes of Keith Emerson, John Paul Jones (Led Zeppelin), Stevie Wonder, and ABBA, and were lauded as “Japan’s first great synths.”

As Yamaha got closer to creating the first digital synthesizer in 1977, Stanford’s application for the FM synthesis patent was finally approved. By then, Yamaha was fully invested in the belief that the technology could make them millions of dollars, and they negotiated a licensing agreement with Stanford, securing the rights to the technology for the next 17 years, until 1994.

patent.png

One of 17 schematics submitted in the FM synthesis patent application (submitted in 1974, and approved in 1977)

But during the next several years, the company hit a number of roadblocks in rolling out their digital technology. “Yamaha,” wrote the English music magazine SOS, “is floating into the backwaters of the professional keyboard world.”

Somewhere in this lull, a tiny Vermont-based synthesizer company aptly named the New England Digital Corporation beat Yamaha to the punch by producing the world’s first digital synthesizer, the “Synclavier.” Though only 20 units were sold at $41,685 each, and they were all reserved for top-notch musicians, Stanford took no chances, and swiftly sued the company for infringing on its FM synthesis patent. From that point forward, the university received a sum of $43 every time a Synclavier was sold.

In 1981, Yamaha finally succeeded in integrating Chowning’s FM synthesis into an instrument. Like their previous synths, the GS-1 and GS-2 were ridiculously expensive — some $15,000 each — and were only produced in limited runs for world-famous performers like Toto. But the instruments were universally touted as great sounding, which boded well for Yamaha.

GS1-ad.png

A GS-1 ad from 1981; they retailed for around $15,000 each

“Yamaha always seemed to make the first product a high-end product,” says Chowning. “They tried to set the highest audio standard at the onset, so that the following products would have a good name and the existing tech would be easier to roll out.”

True to Chowning’s premonition, Yamaha would release a more accessible synth two years later — and it would “turn the music world upside down.”

***

 

 
