Everything posted by Parni_Valjak
-
Greetings to all Bulgarians who have remained Bulgarians. https://youtu.be/v2K4XUaczME
-
When the new Billboard charts came out on April 4, 1964, The Beatles broke all American chart records when they had the Top Five records in the country simultaneously (#5: "Please Please Me," #4: "I Want To Hold Your Hand," #3: "She Loves You," #2: "Twist And Shout," #1: "Can't Buy Me Love"). Plus they had nine other singles scattered in various other positions around the "Hot 100." How many of you still have Beatles' 45 records? And how many of you have the original picture sleeves? The British Invasion was in full swing 51 years ago today.
-
3-day London event will mark the life of iconic synth creator

A performance by Keith Emerson's Three Fates Project will bring the curtain down on a celebration of the life of Dr Robert Moog. The Moog Concordance event – to mark the 10th anniversary of Moog's passing – will be held at London's Barbican Centre from July 8-10 and will also include a performance from The Will Gregory Moog Ensemble, featuring Portishead's Adrian Utley and composer Graham Fitkin.

Dr Moog unveiled the Moog modular synthesizer more than 50 years ago and it had a massive impact on music and instrument design. The Three Fates Project show sees Emerson and his band appear alongside the BBC Concert Orchestra. It features Emerson's Moog solos and is billed as more than just a rock-meets-orchestra performance. The project's title is taken from a track from the 1970 debut album of prog rock ensemble Emerson, Lake & Palmer, who were a favourite band of Bob Moog and who brought his synthesiser sound to some of the largest live audiences of the 70s.

The show's co-producer Paul Smith says: "The Moog Concordance events at The Barbican are the first jewels in the crown of what is going to prove a very exciting, year-long UK celebration of Bob Moog's life and work, and believe me there is plenty more to come and we'll be announcing more about all this in the coming weeks.

"Of course you really can't have a proper Moog celebration without including a seriously proper prog gig. So I was absolutely delighted when Keith Emerson kindly agreed to perform, and it looks very much like we'll be shipping over his original historic tour Moog – the one with the rocket launcher – soon.

"Perhaps we should work out somewhere for it to go on public display, as I'm sure there are fans who would love to be able to see it close up.

"And Mr Rick Wakeman, if you are reading this, we would still really love to have you involved in the year in some manner too if you have time."

Last year, Emerson recalled how he tried to convince Robert Moog to give him a giant synthesiser for free. The Will Gregory Moog Ensemble & Charlemagne Palestine play at the event on July 8, while Suicide: A Punk Music Mass takes place on July 9.
-
-
It was based on the POBEDA / WARSZAWA M20. I learned to drive on a Warszawa like that... from the age of 8 (1962).
-
-
A reference handbook published by Texas Instruments, a company that needs no introduction. It is not comprehensive, but it is modern, well produced and very useful.
-
Capturing The Moment: Microphone Techniques For Live Recording
A variety of ways to capture the performance so it can be brought back alive... March 30, 2015, by Bruce Bartlett

Perhaps the most exciting type of recording comes in the live realm, whether it be in a club or concert hall or stadium. Many musicians and bands want to record live because they feel that's when they play best. The goal, then, is to capture the performance so it can be brought back alive.

Remote recording is exhilarating. The musicians - excited by the audience - often put on a stellar performance. Usually you only get one chance to get it recorded, and it must be done right. It's on the edge, but by the end of the night, especially if everything has gone as planned - what a great feeling!

Challenges abound. The monitors can feed back and/or leak into the vocal microphones, coloring the sound. The bass sound can leak into the drum mics, and the drums can leak into the piano mics. Then there are other mic-related gremlins - breath pops, lighting buzzes, wireless system glitches, and even electric shocks.

How to get around the potential problems? Let's have a look at some effective mic techniques that work well when recording in the live realm. And note that these are tailored more to "pop" music performances.

- When using directional mics, position them close to the source. Close mic'ing increases the sound level at the mic, so less gain is needed, which in turn cuts background noise and leakage. Unidirectional mics (cardioid, supercardioid, hypercardioid) do the same thing by attenuating off-axis sounds. Also, their proximity effect boosts the bass up close, without boosting the bass of distant sounds.
- Use direct boxes and guitar pickups to eliminate leakage. Or use pickups mixed with mics.
- Consider using headworn noise-canceling mics on vocals. A noise-canceling or differential mic is designed to cancel sounds at a distance, such as instruments on stage or monitor loudspeakers. Such a mic provides outstanding gain-before-feedback and isolation. The mic must be used with lips touching the foam windscreen; otherwise the voice is cancelled.
- Use wireless mics correctly. If dropouts can be heard, move the wireless receiver (or remote antennas) closer or to a point where a stronger signal can be realized. If distortion occurs with loud yelling, turn down the gain-trim pot in the mic.
- Prevent hum and buzz. Keep mic cables well separated from lighting and power cables. If the cables must cross, do so at right angles to reduce the coupling between them, and separate them vertically. If hum pickup is severe with dynamic microphones, use ones with humbucking coils built in. Routinely check the microphone cables to make sure the shield is connected at both ends. For outdoor work, tape over cracks between connectors to keep out dust and rain.
- Prevent electric guitar "shocks." There may be a ground-potential difference between the electric guitar strings and the sound system mics, causing shocks when both are touched. It helps to power all instrument amps and audio gear from the same AC distribution outlets. That is, run a heavy extension cord from a stage outlet back to the mixing console (or vice versa). Plug all the power-cord ground pins into grounded outlets. This prevents shocks and hum at the same time. Further, try putting a foam windscreen on each vocal mic to insulate the guitarist from shocks. As a bonus, a foam windscreen suppresses breath pops better than a metal grille screen. If you're picking up the electric guitar direct, use a transformer-isolated direct box and set the ground-lift switch to the position with the least hum.
- Try mini mics and clip-on holders. Nearly all microphone manufacturers offer miniature condenser models. These tiny units sometimes offer the sound quality of larger studio mics. If clipped on musical instruments, they reduce clutter on stage by eliminating boom stands. Plus, the performer can move freely around the stage. And because a miniature clip-on mic is very close to its instrument, it picks up a high sound level. Often, an omni mic can be used without feedback. Note that "omni's" generally have a wider, smoother response than "uni's" and pick up less mechanical vibration. Clutter can also be lessened even when using regular-size mics by mounting them in mic holders that clip on drum rims and mic stands.

Specific Techniques

As always, there is no one "right" way to mic an instrument. The suggestions here are techniques that have been proven to work, but never hesitate to use what feels best for your situation.

Vocal. Cardioid dynamic or condenser handheld mic, maybe with a presence peak around 5 kHz, and always with a foam windscreen to reduce breath pops. Lips should touch the foam for best isolation. Aim the rear of the mic at floor monitors to reduce monitor pickup and feedback. Use a 100 Hz low-cut filter and some low-frequency roll-off to reduce pops and to compensate for proximity effect.

Acoustic guitar. Consider using a cardioid condenser on guitar, between the sound hole and 12th fret, a few inches away. Roll off excess bass. Aim the mic downward to pick up less vocal. Other approaches include using a direct box on the guitar pickup and placing a mini mic near the bottom edge of the sound hole. Roll off excess bass. (Figure 1) Figure 1: Some acoustic-guitar mic'ing techniques.

Saxophone. Mount a shock-mounted cardioid on the instrument bell. Or, try a mini omni or cardioid condenser mic clipped to the top of the bell, picking up both the bell and tone holes a few inches away. (Figure 2) Figure 2: Mobile techniques for saxophone.

Electric guitar. To add some guitar-amp distortion, mic the amp about an inch from its speaker cone, slightly off center, with a cardioid dynamic mic. A leakage-free alternative is to use a direct box and process the signal during mixdown through a guitar-amp modeling processor or plug-in.

Electric bass, synth, drum machine. Go with a direct box.

Leslie organ speaker. Cardioid dynamic mic with a presence peak a few inches from the top louvers. Add another mic on the lower bass speaker.

Drum set (toms and snare). Cardioid dynamic mic with a presence peak, or a clip-on cardioid condenser mic, about 1 inch above the head, 1 inch to 2 inches in from the rim, angled down about 45 degrees to the head.

Drum set (cymbals). Using one or two boom stands, place cardioid condenser mics (flat or rising high-frequency response) 2 feet to 3 feet over the cymbals. The mics can be spaced 2 feet to 3 feet apart, or mounted "XY" style for mono-compatible recording. A stereo mic can also be used effectively. (Figure 3) Figure 3: A strategy for mic'ing all parts of a drum set.

Drum set (kick drum). Remove the front head, or go in through the hole cut in the front head. Inside, on the bottom of the shell, place a pillow or blanket pressing against the beater head. This dampens the decay portion of the kick-drum's envelope and tightens the beat. Place a cardioid dynamic mic with a presence peak and a deep low-frequency response inside, a few inches from the beater. For extra attack or click, use a wooden beater and/or boost EQ around 3 kHz to 6 kHz. Cut a few dB around 400 Hz to remove the papery sound.

Drum set (simple miking). For jazz or blues, sometimes you can mic the drum set with one or two condensers (or a stereo mic) overhead, and another mic in (or in front of) the kick. Note that there may be a need to mix in another mic near the snare drum. As an alternative, clip a mini omni mic to the snare-drum rim, in the center of the set, about 4 inches above the snare drum. With a little bass and treble boost, the sound can be surprisingly good. Put another mic in the kick.

Metal percussion. Use a flat condenser mic about 1 foot away.

Bongos or congas. Place a cardioid dynamic near each drum head.

Grand piano. Tape a mini mic or boundary mic to the underside of the raised lid in the middle. For stereo, use two mics: one over the bass strings and one over the treble strings. And for more isolation, close the lid and tweak EQ to remove the tubby coloration (usually cut around 125 Hz to 300 Hz). Or, raise or remove the lid. Place two flat condenser mics 8 inches over the bass and treble strings, about 8 inches horizontally from the hammers, aiming at them. One other approach is to put the bass mic about 2 feet nearer the tail, aiming at the sound board. (Figure 4) Figure 4: Not one, but two piano-miking methods!

Upright piano. Use two cardioid mics facing the sound board, a few inches away, dividing the piano in thirds.

Banjo. Tape a mini omni mic to the drum head about 2 inches in from the rim, or on the bridge. Or, place a flat-response condenser or dynamic mic 6 inches from the drum head, either centered or near the edge.

Xylophone or marimba. Deploy two flat-response condensers 18 inches above the instrument and 2 feet apart.

Fiddle/violin. Mini omni mic. Put a small foam windscreen on the cable 1.5 inches behind the mic head. Stuff the foam in the tailpiece so the mic head "floats" between the tailpiece and bridge. Another approach is to use a cardioid dynamic or condenser mic about 6 inches over the bridge.

Mandolin, bouzouki, dobro, lap dulcimer. A flat-response cardioid condenser about 6 to 8 inches away from a sound hole is often the best option. Another option: wrap a cardioid dynamic mic in foam and stuff it in the tailpiece aiming up. Cut EQ around 700 Hz for tailpiece miking.

Acoustic bass. Try a flat-response cardioid a few inches out front, even with the bridge. Or, tape a mini mic near an f-hole and roll off excess bass. (Figure 5) Figure 5: Three ways to handle that pesky acoustic bass.

Brass instruments. Place a ribbon or cardioid dynamic about 8 inches from the bell.

Woodwind instruments. Use a flat-response cardioid condenser placed 8 inches from the side - not in the bell.

Flute. Try a cardioid mic near the mouthpiece, with a foam pop filter. Or, use a mini omni clipped on the instrument, resting about 1.5 inches above the zone between mouthpiece and tone holes.

Harmonica. A very closely placed or handheld cardioid dynamic mic is usually the way to go.

Accordion, concertina. Employ a cardioid about 8 inches from the tone holes on the piano-keyboard side. A mini omni mic can be taped near the tone holes on the opposite side (because that side moves).

Audience. This is an interesting one! It can be done with two spaced cardioids on the front edge of the stage aiming at the back row of the audience. (Figure 6) Figure 6: Get the audience into the action! Another way is to use two spaced cardioids hanging over the front row of the audience, aiming at the back row. Or, try two mics at front-of-house (FOH). To prevent an echo between the stage mics and FOH mics, mix the on-stage mics to stereo, then delay that stereo mix relative to the FOH audience mics until their signals align in time.

Keep in mind that each of these techniques involves some compromises in order to fight background noise and leakage, but with some careful EQ, they can put you well on the way to a quality recording.

AES and SynAudCon member Bruce Bartlett is a recording engineer, microphone engineer and audio journalist.
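To put a number on the delay-alignment tip above: the stage mix needs to be delayed by roughly the acoustic travel time from the stage to the FOH audience mics. A minimal sketch in Python (the distance and the speed of sound are illustrative assumptions, not values from the article):

```python
# Rough delay needed to time-align the on-stage mix with FOH audience mics,
# assuming sound travels about 343 m/s at room temperature.
SPEED_OF_SOUND = 343.0  # m/s

def alignment_delay_ms(stage_to_foh_m: float) -> float:
    """Delay (ms) to apply to the stage mix so it lines up with the FOH mics."""
    return stage_to_foh_m / SPEED_OF_SOUND * 1000.0

# Example: FOH position 25 m from the stage -> roughly 73 ms of delay.
print(f"{alignment_delay_ms(25.0):.1f} ms")
```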
-
-
The plots show how the impedance behaves. The first one is without the 47 pF at the transistor's gate, the second one with it. Within the frequency range of the instruments there is no problem with the input impedance. And that is to be expected: with a JFET the input impedance is set by the biasing network. At higher frequencies the parasitic and input capacitances, the drain-gate feedback and so on come into play, but up to 4-5 kHz their influence is small. N.B. Don't be misled by the scales of the plots! And don't forget that with a high input impedance there is a noticeable peak in the response, caused by the pickups' inductance and the parallel capacitance...
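For readers who want to reproduce that kind of plot, here is a minimal sketch of the idea in Python. It only models the gate bias resistance in parallel with the 47 pF capacitor; the resistance value is an assumption, not taken from the actual schematic, and real effects such as drain-gate feedback and other parasitics are ignored:

```python
# Illustrative sketch (not the exact circuit from the thread): the input
# impedance of a JFET stage is dominated by its gate bias resistance, with a
# small parallel capacitance (e.g. 47 pF) shunting it at high frequencies.
import numpy as np

R_BIAS = 1e6      # ohms, assumed gate bias resistance
C_GATE = 47e-12   # farads, the parallel gate capacitance discussed above

def z_in_magnitude(f_hz, r=R_BIAS, c=C_GATE):
    """|Z_in| of R in parallel with C at frequency f."""
    zc = 1.0 / (1j * 2 * np.pi * f_hz * c)
    return np.abs(r * zc / (r + zc))

for f in (100, 1_000, 5_000, 20_000):
    print(f"{f:>6} Hz : {z_in_magnitude(f) / 1e3:7.0f} kOhm")
```

With these assumed values the impedance stays close to the bias resistance up to a few kHz and only starts dropping noticeably above that, which is consistent with the point made above.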
-
Hey, jazzmen! Where do you think you're off to? Just you wait, you'll see what's what!
-
Removing them lowers the gain, and therefore the noise. That, however, applies to the noise of the circuit itself. If for some reason the overall gain then has to be made up by preceding or following devices, the noise may increase, but it will come from those other devices. In the general case no increase in noise is expected. P.S. If a guitar/bass with passive pickups is plugged directly into the input of this device, it really would be better to replace the 100K potentiometer with a 1M one. That way the highs from the instrument will come through better than in the present version. BUT! The cable from the instrument to the device must be as short as possible, no more than 3 metres, and of good quality, with low distributed capacitance. Unfortunately it is not easy to choose a good cable, because the supposedly high-quality ones often have higher capacitance, i.e. they cut the highs.
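A rough illustration of why the pot value and the cable capacitance matter for the highs. This is a simplified model (pickup as a series resistance and inductance, loaded by the cable capacitance in parallel with the pot); all component values are typical guesses, not measurements of any particular instrument:

```python
# Hedged illustration: a passive pickup (series R + L) driving the cable
# capacitance and the volume pot forms a resonant low-pass divider, so a
# heavier load (100k) and more cable capacitance both eat into the highs.
import numpy as np

R_PICKUP = 6_000.0   # ohms, pickup DC resistance (assumed)
L_PICKUP = 2.5       # henries, pickup inductance (assumed)
C_CABLE  = 500e-12   # farads, e.g. ~5 m of ordinary guitar cable (assumed)

def response_db(f_hz, r_load):
    """Magnitude (dB) of the pickup -> cable/pot divider at frequency f."""
    w = 2 * np.pi * f_hz
    z_pickup = R_PICKUP + 1j * w * L_PICKUP
    z_load = 1.0 / (1.0 / r_load + 1j * w * C_CABLE)   # pot || cable capacitance
    return 20 * np.log10(np.abs(z_load / (z_pickup + z_load)))

for f in (1_000, 3_000, 5_000, 10_000):
    print(f"{f:>6} Hz : 100k pot {response_db(f, 100e3):6.1f} dB | "
          f"1M pot {response_db(f, 1e6):6.1f} dB")
```

With the 1M load the sketch also shows the resonant peak mentioned in the earlier post about the impedance plots; with 100k that peak is damped and the top end is pulled down.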
-
First, remove the 50 µF capacitor at the source of the second (last) transistor. If the problem is still there, remove the other 50 µF as well, in the first stage.
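For the curious, a back-of-the-envelope sketch of why removing the source bypass capacitor lowers the gain of that stage. The transconductance and resistor values below are purely illustrative, not taken from the schematic being discussed:

```python
# Rough sketch: with the bypass capacitor in place the source resistor is
# shorted for AC and the common-source stage runs at full gain; without it,
# the resistor adds local negative feedback and the gain drops.
gm = 3e-3     # S, transconductance (assumed)
Rd = 10e3     # ohms, drain load (assumed)
Rs = 1e3      # ohms, source resistor (assumed)

gain_bypassed   = gm * Rd                    # 50 uF cap in place
gain_unbypassed = gm * Rd / (1 + gm * Rs)    # cap removed

print(f"bypassed:   {gain_bypassed:.1f}x")
print(f"unbypassed: {gain_unbypassed:.1f}x")
```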
-
An operation like that can only be done by someone with a lot of experience. There is, however, a good chance that it cannot be carried out at all in practice, because the magnet would have to be glued with special adhesives, and re-magnetisation might also be needed... Only a specialist can give a verdict on this. There is a thread in the forum about loudspeaker repair.
-
That you are wrong is more than clear. The shifted magnet (if that is indeed the case) is the result of the monitor being dropped, or of some other mechanical impact. Presumably it happened during transport, unless the damage was already there with the previous owner and the whole point of the sale was to dump the problem on the next person down the line. Which, in this case, is of course mitko5000...
-
A wonderful collection of knowledge!
-
I have personally measured a peak value of 8 volts from a Gibson. An amplifier I was building could not produce a clean sound from the clean channel! As a last resort I had to investigate the root cause, after a lot of time wasted looking for the problem where it wasn't. Naturally I too was surprised that this was even possible. But a fact is a fact! How I dealt with it is another matter entirely, but the information about the signal level was extremely important.
-
Those interested in the history of the digital synthesizer can take a look at this topic in Technical Literature: The Father of the Digital Synthesizer
-
PART6 Inside CCRMA’s “Listening Room” Whereas the music department had once ousted Chowning for his “strange” concepts, it now embraced him as a cult hero: by the mid-1980s, nearly every graduate student composer at Stanford was making use of a computer. Chowning had brought Stanford more than a steady stream of revenue; he’d brought them prestige. In the midst of synthesizer developments in 1977, Chowning had traveled back to Paris’s IRCAM — by then, the world’s biggest digital music research center — and presented a new work, “Stria.” The piece was groundbreaking in many ways. Written using SAIL, a new language developed by Stanford’s Artificial Intelligence Lab, it was among the first compositions to be fully generated by a computer, and to integrate elements from the golden mean and the Fibonacci sequence. If Chowning had any doubters at this point, they certainly didn’t make themselves known. In the mid-1980s, Chowning’s FM chips were implemented as a soundcard component in PCs, phones, and consoles; Atari, NEC, Fujitsu, Sharp, and Sega all enlisted the technology to produce MIDI sounds. The patent was making more money than ever before — and at the height of this second wave, Stanford and Yamaha signed a new royalty agreement: 1.5% of the sale of each instrument would be collected by the OTL, up from .5%. Records show that Stanford raked in FM patent royalties of $1.56 million for the 1986-7 fiscal year; by 1992, this number was $2.7 million — at that time, the second most lucrative patent at Stanford, trailing only genetic engineering. The FM chip, which Yamaha had developed with Chowning, was selling some 736,000 pieces per year. One Canadian company alone was purchasing 20,000 of them a month for use in “black boxes with MIDI inputs.” An FM chip; Stanford Special Collections Library By 1989, Julius Orion Smith, one of Chowning’s younger colleagues, had developed another form of synthesis based around physical modeling, or the use of mathematical models to simulate sound. Yamaha licensed this technology as well, and soon, the Stanford-Yamaha partnership was so strong that they decided to “pool their portfolio of patents” and form Sondius-XG, a mutually-beneficial investment group that brought both parties considerable wealth over the years. Throughout all of this, Chowning remained level-headed, if not disinterested. “I never had the idea that this was going to be a big money-maker for Stanford,” he says. “My interest was never in patenting; my interest was in using it in my compositions, my own music.” When he retired in 1996, Chowning had enough influence to secure two professorships in his program — one for Julius O. Smith, and the other for Chris Chafe. In later years, Chafe took the reins from Chowning as the center’s director, expanding its disciplines to include composition, electrical engineering, computer science and neuroscience. “He’s added three additional professors to the faculty, seven consulting professors, and two staff/lecturer positions — not an easy task,” Chowning relates. “The teaching/research program has grown by a factor of three.” *** Today, John Chowning is regarded as one of electronic music’s great pioneers — one who braved the critical gaze to realize his dreams and change the landscape of sound. But Chowning doesn’t put much stock in his technological breakthroughs: he’s a composer at heart, a man interested more in explorative composition than prestige.
“I don’t think of myself as an inventor,” he admits, scanning across the Stanford hills from one of CCRMA’s studios. “My head space is a composer: the inventions were simply a result of a compositional search, a musical idea.” But it’s still hard to ignore the impact of Chowning’s accomplishments. Nearly forty years after its issuance, his FM synthesis patent has made Stanford a cool $25 million. It has, along with Chowning’s creative spirit, led to the creation of one of the finest digital music centers in the world — one that designs the sound structure of opera houses, and does “acoustic archeology” on pre-Incan ruins. Above all else, Chowning loves to learn. Occasionally, the 80-year-old will sit down with his 28-year-old son, a classical French horn player, and study “club” music. “I listen to him when he tells me to listen to music,” he says. “Glitch music, that’s great stuff. Good subwoofer pumps. You’re never too old to explore sounds.” It’s a mantra he’s taken to heart. In his youth, Chowning would hike into the mountains to yell in caves and listen to the echoes; all these years later, he hasn’t lost touch with the world’s acoustic wonders. “There’s an underground tunnel in Palo Alto where kids ride their bicycles and shout,” he says. “Adults don’t do it because they’re too self-conscious — but every time I pass through, I can’t help myself. You can only do it once, because people get agitated. It sounds like music to me.” This post was written by Zachary Crockett.
-
PART5 In a 1981 progress report to Chowning, Yamaha’s Advanced Development Division made its intentions clear: “To [achieve] better sales, our next model should not be an improvement of the electronic piano, but rather be a new attractive type of keyboard instrument.” Instead of emulating existing instruments, the company would create and market a distinctly new one. When Yamaha engineers released the “DX7” in May of 1983, they knew they had done just that. The keyboard fully integrated all of the capabilities of Chowning’s FM synthesis on a small Intel 3000 chip, was sleek, and, unlike its predecessors, was more within reach of the general public. At a price point of $1,995 USD, the DX7 featured 16-note polyphony (meaning that 16 keys could be played simultaneously), and allowed the user to program up to 32 of his own custom sounds. Though it was mass produced for a wider audience, its sound quality earned it high praise from a number of famous musicians: Elton John, Stevie Wonder, Queen, U2, Phil Collins, Kraftwerk, Talking Heads, Enya, Brian Eno, Yes, Supertramp, Steve Winwood, Depeche Mode, The Cure, Toto, Michael McDonald, Chick Corea, Lynyrd Skynyrd, Beastie Boys, and Herbie Hancock included. A vintage ad for the DX7, featuring testimonials from Elton John, Chick Corea, and Michael McDonald, among others (1983) From its onset, the DX7 was a massive success — one that both Stanford and Yamaha minted money from. Royalty records in the Stanford archives show that from May 1983 to October 1983 alone, Yamaha raked in some $39,913,067 (roughly $92,868,000 in 2015 dollars) in DX7 sales. As per the original licensing agreement, Stanford received .5% of the grand total, or $199,565 ($468,270). And this was just from the product’s first six months on the market. In the next six-month cycle, from November 1983 to April 1984, Stanford’s royalty increased to $287,500 ($674,606). A December 18, 1984 letter from Stanford to Yamaha reveals that the school was making substantially more than $1 million per year off of the FM synthesis patent and its relationship with Yamaha. Revenue reports from Yamaha; Stanford Special Collections Library For a brief time — before the meteoric rise of genetic engineering and the Internet — FM synthesis was Stanford’s highest-grossing patent in the school’s history. The synthesizer exploded in popularity in a variety of markets (the U.S., Japan, Great Britain, and France) and continued to bring in substantial profits through its discontinuation in 1986. Yamaha’s market share for pianos in Japan jumped from 40% in 1980 to more than 65% by 1985; its closest rival, Kawai, trailed far behind at 22%. The company, once focused on diversification, now intensely drove its synthesizer production, producing some 1,000 electronic organs per day. The DX7 became the world’s best-selling synthesizer, and demand was so high that it had a two-year backorder. “I’m sure there are many people who wish I didn’t exist at the time,” laughs Chowning. “So many competing companies had to close down — they simply couldn’t compete with the DX7.” The Prince of CCRMA Chowning with Max Mathews (c.1990) Though the DX7’s success paid off the most for Yamaha and Stanford, Chowning was duly rewarded for his role. Dismissed from the staff just a few years before, he was now given full tenure with a “healthy” professor’s salary.
“I'm probably the only person in the history of Stanford, or most universities, who, within the same university, went from assistant professor to full professor without being appointed associate professor,” says Chowning. “In a way, that’s vindication.” Stanford’s patent royalty distribution worked as follows: 15% would be taken off the top for the OTL’s “tech budget,” out-of-pocket expenses would be deducted, and the rest would be distributed to Stanford’s general fund, the inventor’s department, and the inventor himself in “equal sums.” Though the discovery wasn’t as lucrative for Chowning as it was for Stanford, the professor reaped rewards in other forms. Records in Stanford’s special collections library show that from 1984 to the early 1990s, he was making as much as $600 per day consulting for Yamaha on its technologies. What’s more, his department, CCRMA, was moved to what Chowning calls “the most beautiful building on campus” — a princely, 100-year-old Spanish Gothic building perched atop a hill, and once inhabited by the university’s president. Of course, Yamaha decked it out with more than $18,000 worth of instruments and music equipment. The "Knoll," damaged in the Loma Prieta earthquake of 1989 and restored in 2005, is home to John Chowning’s CCRMA department. >>>>PART6
-
PART4 Over three years of licensing FM synthesis, Yamaha, in tandem with Chowning, gradually improved the realistic qualities of their “timbres,” or sound effects. This level of commitment is evident in an excerpt from a July 1975 letter from Yamaha to Chowning: Yamaha’s progress update on the synth tones; Stanford Special Collections Library Some friendly correspondence: Yamaha welcoming Chowning to Japan for the first time; Stanford Special Collections Library While Yamaha tinkered with digital synthesizer technology, they made great strides with their analogue synths. In 1975 and 1976, the company released two machines — the GX1, and the CS80 — in limited runs of 10 units. Despite their $50,000 price tags, they were snatched up by the likes of Keith Emerson, John Paul Jones (Led Zeppelin), Stevie Wonder, and ABBA, and were lauded as “Japan’s first great synths.” As Yamaha got closer to creating the first digital synthesizer in 1977, Stanford’s application for the FM synthesis patent was finally approved. By then, Yamaha was fully invested in the belief that the technology could make them millions of dollars, and they negotiated a licensing agreement with Stanford, securing them rights to the technology for the next 17 years, until 1994. One of 17 schematics submitted in the FM Synthesis patent application (submitted in 1974, and approved in 1977) But during the next several years, the company hit a number of roadblocks in rolling out their digital technology. “Yamaha,” wrote the English music magazine SOS, “is floating into the backwaters of the professional keyboard world.” Somewhere in this lull, a tiny Vermont-based synthesizer company aptly named the New England Digital Corporation beat Yamaha to the punch by producing the world’s first digital synthesizer, the “Synclavier.” Though only 20 units were sold at $41,685 each, and they were all reserved for top-notch musicians, Stanford took no chances, and swiftly sued the company for infringing on its FM synthesis patent. From that point forward, the university received a sum of $43 every time a Synclavier was sold. In 1981, Yamaha finally succeeded in integrating Chowning’s FM synthesis into an instrument. Like their previous synths, the GS-1 and GS-2 were ridiculously expensive — some $15,000 each — and were only produced in limited runs for world-famous performers like Toto. But the instruments were universally touted as great sounding, which boded well for Yamaha. A GS-1 ad from 1981; they retailed for around $15,000 each “Yamaha always seemed to make the first product a high-end product,” says Chowning. “They tried to set the highest audio standard at the onset, so that the following products would have a good name, the existing tech will be easier to roll out.” Just as Chowning foresaw, Yamaha would release a more accessible synth two years later — and it would “turn the music world upside down.” ***
-
PART3 Dr. Chowning’s Stanford Experience By 1970, Chowning had succeeded in using FM synthesis to mimic various tones — drum sounds, vocals, brass — in a somewhat rudimentary form. When he passed his findings along to Max Mathews, the Bell Labs scientist he’d been inspired by, Mathews “fully understood” the importance of the discovery; with the endorsement of John Pierce, then-director of research at Bell Labs, Mathews suggested that Chowning apply for a patent. In those days, innovative Stanford professors had a choice: they could develop their inventions independently, and assume all of their own financial and legal risk, or they could sign them over to Stanford’s Office of Technology Licensing (OTL). The OTL, which Stanford had just created, was the safer bet for Chowning: it would absorb all of the risk, bear the monetary burden of trying to license the technology (some $30-40,000), and would allow him to be involved in the process. The downside was that the university would get the lion’s share of the income from the patent, and would only pass a small sum on to the inventor himself. “I didn’t want to deal with lawyers — I wanted to do my music,” defends Chowning. “I didn’t care about the money as much as I cared for my compositions. It was natural for me to say, ‘Please take it.’” For a transactional fee of $1, Chowning signed the patent for FM synthesis over to Stanford’s OTL, which then began the long process of courting instrument companies. Meanwhile, Chowning hunkered down and focused on integrating his discovery into his own music. Sabelithe, a piece which he’d begun in 1966 but completed in 1971, was the first composition to ever feature FM synthesis. If you listen closely between the 4:50 and 5:10 mark, you’ll hear a drum-like tone gradually transmogrify into a trumpet sound: https://youtu.be/5MBRF8Mqj0s A year later, he presented Turenas, the first electronic composition to feature the “illusion of sounds” moving in a 360-degree space. “Normally when a composer puts up a work, there are a couple of string quartets, a symphony or two, chamber music,” says Chowning. “I had very little — just a computer and some speakers.” Though Chowning remembers the day of the composition’s premiere as “the realization of [his] dreams for many years,” his contemporaries weren’t as smitten. Years before, as a young student in Paris, Chowning had witnessed a room full of traditional composers boo an electronic musician’s performance; now, he was the man being scrutinized, and though the attendees weren’t the rowdy type, they exhibited the same breed of non-acceptance for this new music. “What I was hearing in Turenas was not what the composers who were asked to come evaluate my work were hearing," he says. "What sense does sound moving in space make to people used to orchestral, traditional sounds? Universities are conservative — they move, but slowly — and it’s not popular to deviate from tradition.” https://youtu.be/kSbTOB5ft5c Despite his important output in the early 1970s (which also included an extensive academic paper mapping out FM synthesis and its exciting implications), Chowning soon found himself on the chopping block. Seven years into his role as an assistant professor, it was time for him to take his sabbatical. “During this time, Stanford either promotes you or tells you to find work elsewhere,” says Chowning.
“While I was on sabbatical, I was told I would not be teaching at Stanford anymore.” Though a reason for this decision was never provided, Chowning admits that the copious time he spent “developing a computer music system” was not perceived as a value add by the institution. At first, this was devastating news for the young academic: “It was a big problem when I was let go from Stanford. I had a young family and I had to figure out how I was going to support them. But I still felt like I had to pursue what I started. It wasn’t worth giving up. I was in a digital world, but the whole process was intensely musical. Programming and creating sounds, and figuring out how the two relate — all of that was, for me, the point.” As if on cue, the famous French composer Pierre Boulez contacted Chowning and offered him an advisory position in Paris. Boulez was, at the time, “a major figure in music” — the conductor for both the New York Philharmonic, and the BBC Orchestra — and had just been commissioned by France’s prime minister to create a national institution for experimental music research. It was his hope that the resulting project, IRCAM, would be of interest to Chowning. Chowning jumped at the opportunity, and soon found himself back in Paris, assisting in the development of the program. *** As Chowning revelled in his new role, Stanford struggled to lease the FM synthesis patent. They approached all of the major organ companies, but none of them had the technical ability to grasp the discovery’s implications. Chowning’s discovery was ahead of the industry’s learning curve. “Hammond, Wurlitzer, Lowry — they’d come around, take a listen, and say, ‘That sounds good!’” recalls Chowning, who learned of these interactions later on. “They understood the abstract, but had no knowledge of the digital domain or programming. All said no.” As a last resort, Stanford contacted Yamaha, a Japanese musical instrument manufacturer which, at the time, had already begun investigating the digital domain for the distant future. And as luck would have it, the company gave the patent a chance. “Yamaha sent out a young engineer to Palo Alto,” Chowning says, “and in ten minutes, he understood [our technology].” In fact, Yamaha understood it so well that they decided to sign a 12-month licensing agreement and figure out if it was something that could be beneficial to them commercially. An early negotiation letter from Yamaha to Stanford; Stanford Special Collections Library By early 1975, it’s likely that Stanford began to realize it had made a mistake by letting Chowning go. Not only did his patent show promise of making the university “tens of thousands of dollars a month,” but he had been swiftly snatched up by one of the biggest names in music to advise the development of the world’s most advanced digital music center in Paris. With its tail between its legs, Stanford approached Chowning and extended an offer to return, this time as a research associate. Chowning agreed. Back in his old office, Chowning wasted no time in redefining the school’s music department. With a small team of colleagues — John Grey, James Moorer, Loren Rush and his old adviser Leland Smith (then busy developing the SCORE music publishing program) — he proceeded to found the Center for Computer Research in Music and Acoustics (CCRMA, pronounced “karma”).
The goal of CCRMA was simple: “it was a coherent group of people working together toward the synthesis, processing, spatialization and representation of sound.” It was also, by all accounts, the first program of its kind in the United States. But really, says Chowning, CCRMA was a manifestation and extension of the ideas and projects he’d started more than a decade before, as a graduate student. Except there was a difference: while his efforts had once been disregarded and underappreciated, they were suddenly of the utmost importance to Stanford and its new patent licensee. How Yamaha and Chowning Built the Modern Synthesizer A Yamaha DX7: the synth that changed music forever Since licensing FM synthesis in 1974, Yamaha had been in constant communication with Chowning and Stanford’s Office of Technology Licensing. For years, analog synthesizer instruments had ruled the market. But they came with their fair share of shortcomings (or unique quirks, depending on who you talk to). From the 1950s to the early 1960s, electronic instruments were limited by their dependence on magnetic tape as a means of producing and recording sounds. Even newer developments in the mid-1960s, like the Moog or the Mellotron, were fickle: notes sounded different each time they were played, and there were often variations in pitch and amplitude. Most woefully, each individual note played was recorded “in isolation” — that is, few instruments had the ability to polyphonically produce sound. Efforts to digitize the synthesizer had been made in previous decades, but were thwarted by the gargantuan size of computers and memory cards, and the fact that it took up to 30 minutes just to hammer out a few measures of music. But with semiconductor technology improving rapidly by the mid-1970s, it became feasible that Chowning’s FM synthesis technology could fit on a reasonably-sized computer chip. Working with Chowning’s technology in 1974, Yamaha successfully produced its first prototype, a machine called MAD. Though it was just a proof of concept, and nowhere near a product that would appear on the market, Chowning saw the team’s potential. “It was clear right away that Yamaha’s engineers were masterful,” he says. “They put together these instruments quickly, and made great strides on my work. It was a very good relationship.” >>>>PART4
-
PART2 Though Chowning was a “mere” percussionist with no discernible electronics skill, he didn’t flounder. Instead, he decided to take advantage of Stanford’s “pretty good computers” by enrolling in a programming class. “I had to prove I could do it,” he says, “and it really wasn’t too difficult.” Using a bulky, but then-new Burroughs B-5500, he learned to code in ALGOL. A Burroughs B-5500 system: an early computer enlisted by Chowning for music By the Summer of 1964, he’d acquired some basic proficiency with the machine and language, and decided to journey to Murray Hill, New Jersey to meet Max Mathews, the man who’d written the paper that had inspired him. When Chowning arrived, Mathews was pleasantly surprised, and took the curious musician under his wing. Chowning recalls: “I told him that I wanted to use his program, ‘Music 4,’ and he gave me a big box of punch cards. Each represented a particular waveform — sinusoidal, triangular wave, and whatnot. Then, another card would tell the computer how to connect these, and modulate frequencies. You could generate thousands and thousands of periods of a sine wave with just a couple of punch cards.” At the time, Stanford was far from the risk-taking institution it is lauded as today — and its music program was especially traditional and rigid. As Chowning had experienced in France, most of his college colleagues scoffed at the unfamiliar, foreign concepts of computer music. “It was against what the department said music was; they said I was dehumanizing music!” laughs Chowning. “My response was, ‘Perhaps it’s the humanization of computers.’” Despite the music department’s backlash, when Chowning returned to Stanford with Mathews’ box full of punch cards, he found a supportive mentor in composition professor Leland Smith: “[Leland Smith] was the one person in the music department who said, ‘Go ahead and tell me what you learn,’” recalls Chowning. “It was a very traditional department at the time, with a lot of interest in historical musicology and performance practice; what I was doing was pretty far from the central interest of the department, but Leland encouraged it.” Just as Chowning delved into the relatively unknown world of computer music, Smith took off to Europe for a one-year sabbatical. But before he left, he made Chowning promise him one thing: “Show me what you’ve learned when I return.” The Discovery of FM Synthesis Years before, when Chowning was studying in Paris, he had come across Bird in Space, a bronze sculpture by Romanian artist Constantin Brâncuși. “I remember looking up at this beautiful, modernist work: simple, elegant, but complicated,” he reminisces. “All the lines extended one’s eyes into a space that was absolutely gripping in its effect.” He was driven, in that moment, to “create spatial illusions” — to create sounds that would move like the lines in Bird in Space. In the late months of 1964, while Leland Smith was abroad, Chowning had an idea: he’d create a “continuum” to move sounds in a 360-degree space. At the time, not much was known about why or how sounds moved in space; Chowning’s first challenge was to create sounds that “not only had angular distribution, but also radial distance.” In order to achieve this, he realized that he’d need to fully immerse himself in acoustics, as well as the science behind auditory perception. For the next two years, the rogue musician hit the books.
“I had to learn much about audio engineering and perception, psychoacoustics, and the cognition of sound (the difference between something loud and close, and loud and far),” says Chowning. “I did it all so that I could accomplish my composition goals.” *** By 1966, Chowning’s time as a Stanford Ph.D. student had come to a close. With strong references from his adviser, Leland Smith, and others in the department, he joined the staff as an assistant professor of composition. Increasingly, Chowning spent much of his free time in the “dungeons” of Stanford’s artificial intelligence lab, where he could access computers and analyze the properties of the sounds he was working with. It was here, late one night in the Autumn of 1967, that Chowning had an unintentional breakthrough: “I was experimenting with very rapid and deep vibrato [a music effect characterized by a rapid, pulsating change in pitch]. As I increased the vibrato in speed and depth, I realized I was no longer hearing instant pitch and time.” To put this in digestible terms, any given sound — say a B-note on a violin — has a distinctive timbre (identifying tone quality), and produces a sound wave. Sounds can be manipulated with certain effects, which are either natural (playing an instrument in a giant room provides reverberation and/or echo), or imposed (by manipulating a note on a violin, one can produce vibrato, or a wavering sound). The sound cards in Chowning’s possession were able to reproduce these effects digitally. Chowning found that by using two simple waveforms — one, the carrier, the frequency of which was modulated by the other — he could create a very rapid “vibrato” capable of producing complex, harmonic or inharmonic tones depending on the waveforms’ frequencies and the depth of the modulation. He called this “frequency modulation synthesis,” or FM synthesis. The sounds this method produced were entirely foreign: “I was aware that I was probably the first person to ever hear these sounds, that what I was hearing was something musical that had probably never been heard by anyone before — at least, not by anyone on this planet.” “It was 100% an ear discovery: I knew nothing about the math behind it,” adds Chowning. “It was a product of my musical training.” Though he instantly understood the gravity of what he’d discovered, Chowning realized that, in order to fully grasp its applications, he’d need to understand the math that was making these sounds possible. This was a problem: at 30 years of age, his last math course had been freshman algebra — and even then, he’d had to “beg for a passing grade.” But Chowning was driven forward by a deep-rooted desire to explore music, to reach into the mires of unexplored sound. And he had help: first from his “angel,” David W. Poole, an undergraduate math major who taught him how computers worked, and then from the scientists and engineers at Stanford’s Artificial Intelligence (AI) Lab. “There was excitement in the discovery,” he admits, “but the potential of using it in my music is what drove me — not discovery, or invention.” So, the music professor spent innumerable hours delving into math books, and consulting these researchers in the AI Lab. “Gradually,” he says, “I learned to understand the FM equation through programming and math, and learned that it had an importance not seen by many people.” The “importance” was that FM synthesis could be applied to produce highly accurate digital replications of real instruments. 
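As a rough modern illustration of the two-oscillator idea Chowning describes (not his original code), here is a minimal FM sketch in Python with numpy; the frequencies and modulation index are arbitrary choices:

```python
import numpy as np

SR = 44_100                        # sample rate in Hz
t = np.arange(0, 1.0, 1.0 / SR)    # one second of time

fc = 440.0     # carrier frequency (illustrative)
fm = 110.0     # modulator frequency (illustrative)
index = 5.0    # modulation "depth": pushed far beyond ordinary vibrato

# The carrier's phase is modulated by a second sine wave. Integer fc/fm
# ratios tend to give harmonic spectra; non-integer ratios give inharmonic,
# bell-like tones.
y = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
```

Sweeping the index over time is what turns the effect from a fast vibrato into a continuously changing timbre.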
More boldly put, it could open up musicians to a new world of sound customization. But Chowning couldn’t neglect the main duties of his job. Like all professors in the music department, he was required to compose regularly, with the expectation that his work would receive peer recognition. He also had teaching duties to maintain, which consumed much of his time. While juggling his responsibilities, he spent the next four years working on FM synthesis, and replicating the sounds of various instruments. >>>> PART3
-
The Father of the Digital Synthesizer (Mar 23, 2015) “I was aware that I was probably the first person to ever hear these sounds, and that what I was hearing was something musical that had probably never been heard by anyone before — at least, not by anyone on this planet.” — John Chowning, Inventor of FM Synthesis Long before Stanford University was considered a technology powerhouse, its most lucrative patent came from an under-spoken composer in its music department. Over the course of two decades, his discovery, "frequency modulation synthesis," made the school more than $25 million in licensing fees. But more importantly, FM synthesis revolutionized the music industry, and opened up a world of digital sound possibilities. Yamaha used it to build the world’s first mass-marketed digital synthesizer — a device that defined the sound of 80s music. In later years, the technology found its way into the sound cards of nearly every video game console, cell phone, and personal computer. Despite the patent’s immense success, its discoverer, Dr. John Chowning, a brilliant composer in his own right, was passed over for tenure by Stanford for being “too out there.” In Stanford’s then-traditional music program, his dabblings in computer music were not seen as a worthy use of time, and he was largely marginalized. Yet by following his desire to explore new frontiers of audio, Chowning eventually recontextualized the roles of music and sound, found his way back into the program, and became the department chair of his own internationally-renowned program. This is the story of an auditory pioneer who was unwilling to compromise his curiosity — and who, with a small group of gifted colleagues, convinced the world that computers could play an important role in the creation of music. Echoes in Caves John Chowning in his youth John M. Chowning was born in the Autumn of 1934, just as New Jersey’s northern oak leaves were turning yellow. In the throes of the Great Depression, the Chownings did everything they could to provide a modest household for their two sons and daughter — though it was a space “devoid of music.” Despite this, the young Chowning felt mystically drawn to sound. “I loved caves as a child,” he reminisces. “I would go hiking up in the Appalachian Mountains, go into these caves, and just listen to the echoes. They were mysterious, magical places.” It was the world’s natural acoustics that inspired Chowning to take up the violin at the age of eight, though the instrument never quite captivated him. After he moved to a Wilmington, Delaware high school in the 1940s, “a great music teacher” exposed him to his true love: percussion. Fully versed in reading sheet music, Chowning was initially enlisted to play the cymbals, but soon graduated to the drums. This became his “full-hearted passion,” and before long, he was a competent jazz percussionist. “My whole world,” he says, “was music, music, music.” Upon graduating, Chowning served in the Korean War and, due to his skill, was enlisted as a drummer in one of the U.S. Navy’s jazz bands. “There were so many good musicians there who really pushed me to perform — musicians who were, at the start, much better than I,” he admits. “There was a lot of pressure for me to get my chops up.” For three years, he toured Europe with the group, catering to the culture’s growing interest in the newer movements of jazz. He returned to the U.S.
with a free college education on the GI Bill, and attended Wittenberg University, a small liberal arts school in Ohio, at the behest of his father. Here, he immersed himself in the “contemporary composers” — Béla Bartók, Igor Stravinsky, Pierre Boulez, and the like. “I got very comfortable playing by myself,” he adds. “Improvisation was my path to new music.” A U.S. Navy Jazz band (c. mid-1940s) It was a path that would eventually lead him back to Europe. After graduating in 1959, Chowning wrote a letter to Nadia Boulanger, a famed French composer who taught many of the 20th century’s best musicians, and expressed his great interest in studying under her. Chowning was accepted, and he and his wife moved to France, where he joined 40 other gifted students. For the next two years, he met with her once a week for intensive studies. “It was a very fruitful time for me,” he recalls. “I learned a lot about harmony and counterpoint.” But Chowning’s biggest revelation — the one that changed his life — came when he attended Le Domaine Musical, an “avant-garde” concert series in Paris that often featured experimental music: “Some of the music involved loudspeakers. Hearing what some of these composers were doing was life-changing — Karlheinz Stockhausen had a four-channel electronic piece involving [pre-recorded] boys’ voices. The spatial aspects caught my attention: that one could create, with loudspeakers, the illusion of a space that was not the real space in which we were listening.” While Chowning sat awestruck, the rest of the attendees — mostly “traditionally-minded” musicians and students — “booed, screamed and whistled in disapproval after the piece was over.” Though electronic music was just emerging in Europe at the time, it wasn’t yet popular or widely accepted, especially among trained musicians. In his next meeting with Boulanger, Chowning sheepishly admitted his interest in “loudspeaker music,” expecting to be ridiculed. Instead, the well-versed teacher encouraged him to pursue it. But as Chowning soon learned, electronic music was “highly dependent on special studios with technical know-how.” Though he yearned to reproduce it and explore it, he did not have the financial or technical ability to do so. “When I heard this music in Paris, I thought, ‘I could create music if I could control those loudspeakers,’” recalls Chowning. “But it was disappointing to learn they were dependent on such high tech.” Stanford’s Misfit Composer With no way to pursue his newfound interest in electronic music, Chowning returned to the United States, enrolled in the music doctoral program at Stanford University, and became involved as a percussionist in the school’s symphony. Gradually, he began to resign himself to a more traditional repertoire. Then, in the winter of 1963, during Chowning’s second year of studies, a fellow percussionist handed him a page ripped out of Science magazine. Chowning hardly glanced at it before stuffing it in his pocket — but two weeks later, he rediscovered it and gave it a read. The article, “The Digital Computer as a Musical Instrument,” was written by a young scientist at Bell Laboratories named Max Mathews. At first, Chowning couldn’t really make sense of it. “I had never even seen a computer before, and I didn’t understand much,” admits Chowning.
“But there were a couple of statements that got my attention — especially that a computer was capable of making any conceivable sound.” Schematics from Mathews’ paper; Stanford Special Collections Library The article was accompanied by several diagrams that were, at first, beyond Chowning’s comprehension, but his curiosity compelled him and he soon pieced together an understanding: “A computer would spit out numbers to a digital-to-analog converter, which would then convert numbers into voltage proportionally. The voltage went to a loudspeaker... It made me think back to all those big studios in Europe. I thought, ‘If I could learn to generate those numbers and get access to a computer and a loudspeaker, none of that expensive equipment would be required!’” >>>> PART2
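A minimal sketch of the pipeline Mathews describes — a program "spitting out numbers" that a converter turns into proportional voltages — in Python; the tone, sample rate, and file name are arbitrary placeholders, not from the article:

```python
# "Spit out numbers" for a 440 Hz sine wave and store them as 16-bit samples;
# a DAC (today, the sound card) turns each number into a proportional voltage.
import numpy as np
import wave

SR = 8_000                                 # samples per second (illustrative)
t = np.arange(0, 1.0, 1.0 / SR)
samples = np.sin(2 * np.pi * 440.0 * t)    # numbers in the range [-1, 1]
pcm = (samples * 32767).astype(np.int16)   # scale to 16-bit integers

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```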
-
Nikolay Karageorgiev - JTC Solo Contest 2015
Parni_Valjak replied to atanas_shishkov's topic in Китара (Guitar)
Me too! (46)
mitko5000, clearly it isn't sinking in... Jump up and down all you like, but attempts to fix this yourself, or with the help of an incompetent friend, will not end well. You will only drive things to complete ruin! If you start poking around in the loudspeakers, the most likely outcome is that you'll end up throwing the monitors away... You need knowledge, lad! Enthusiasm alone is not enough.
-
In The Studio: Mid-Side Microphone Recording Basics An incredibly useful method to attain ultimate control of the stereo field in your recordings... September 25, 2014, by Daniel Keller Courtesy of Universal Audio. When most people think of stereo recording, the first thing that comes to mind is a matched pair of microphones, arranged in a coincident (XY) pattern. It makes sense, of course, since that’s the closest way to replicate a real pair of human ears. But while XY microphone recording is the most obvious method, it’s not the only game in town. The Mid-Side (MS) microphone technique sounds a bit more complex, but it offers some dramatic advantages over standard coincident miking. If you’ve never heard of MS recording, or you’ve been afraid to try it, you’re missing a powerful secret weapon in your recording arsenal. More Than Meets The Ears Traditional XY recording mimics our own ears. Like human hearing, XY miking relies on the time delay of a sound arriving at one input milliseconds sooner than the other to localize a sound within a stereo field. It’s a fairly simple concept, and one that works well as long as both mics are closely matched and evenly spaced to obtain an accurate sonic image. One of the weaknesses of the XY microphone technique is the fact that you’re stuck with whatever you’ve recorded. There’s little flexibility for changing the stereo image once it’s been committed to disk or tape. In some cases, collapsing the tracks into mono can result in some phase cancellation. The MS technique gives you more control over the width of the stereo spread than other microphone recording techniques, and allows you to make adjustments at any time after the recording is finished. Mid-Side microphone recording is hardly a new concept. It was devised by EMI engineer Alan Blumlein, an early pioneer of stereophonic and surround sound. Blumlein patented the technique in 1933 and used it on some of the earliest stereophonic recordings. The MS microphone recording technique is used extensively in broadcast, largely because properly recorded MS tracks are always mono-compatible. MS is also a popular technique for studio and concert recording, and its convenience and flexibility make it a good choice for live recording as well. What You Need While XY recording requires a matched pair of microphones to create a consistent image, MS recording often uses two completely different mics, or uses similar microphones set to different pickup patterns. The “Mid” microphone is set up facing the center of the sound source. Typically, this mic would be a cardioid or hypercardioid pattern (although some variations of the technique use an omni or figure-8 pattern). The “Side” mic requirement is more stringent, in that it must be a figure-8 pattern. This mic is aimed 90 degrees off-axis from the sound source. Both mic capsules should be placed as closely as possible, typically one above the other. How It Works It’s not uncommon for musicians to be intimidated by the complexity of MS recording, and I’ve watched more than one person’s eyes glaze over at an explanation of it. But at its most basic, the MS technique is actually not all that complicated. The concept is that the Mid microphone acts as a center channel, while the Side microphone’s channel creates ambience and directionality by adding or subtracting information from either side. The Side mic’s figure-8 pattern, aimed at 90 degrees from the source, picks up ambient and reverberant sound coming from the sides of the sound stage. 
Since it’s a figure-8 pattern, the two sides are 180 degrees out of phase. In other words, a positive charge to one side of the mic’s diaphragm creates an equal negative charge to the other side. The front of the mic, which represents the plus (+) side, is usually pointed to the left of the sound stage, while the rear, or minus (-) side, is pointed to the right.

Mid-Side recording signal flow.

The signal from each microphone is then recorded to its own track. However, to hear a proper stereo image when listening to the recording, the tracks need to be matrixed and decoded. Although you have recorded only two channels of audio (the Mid and Side), the next step is to split the Side signal into two separate channels. This can be done either in your DAW software or hardware mixer by bringing the Side signal up on two channels and reversing the phase of one of them. Pan one side hard left, the other hard right. The resulting two channels represent exactly what both sides of your figure-8 Side mic were hearing.

Now you’ve got three channels of recorded audio – the Mid center channel and two Side channels – which must be balanced to recreate a stereo image. (Here’s where it gets a little confusing, so hang on tight.) MS decoding works by what’s called a “sum and difference matrix,” adding one of the Side signals—the plus (+) side—to the Mid signal for the sum, and then subtracting the other Side signal—the minus (-) side—from the Mid signal for the difference. If you’re not completely confused by now, here’s the actual mathematical formula:

Mid + (+Side) = left channel
Mid + (-Side) = right channel

Now, if you listen to just the Mid channel, you get a mono signal. Bring up the two side channels and you’ll hear a stereo spread. Here’s the really cool part—the width of the stereo field can be varied by the amount of Side channel in the mix!

Why It Works

An instrument at dead center (0 degrees) creates a sound that enters the Mid microphone directly on-axis. But that same sound hits the null spot of the Side figure-8 microphone. The resulting signal is sent equally to the left and right mixer buses and speakers, resulting in a centered image.

An instrument positioned 45 degrees to the left creates a sound that hits the Mid microphone and one side of the Side figure-8 microphone. Because the front of the Side mic is facing left, the sound causes a positive polarity. That positive polarity combines with the positive polarity from the Mid mic in the left channel, resulting in an increased level on the left side of the sound field. Meanwhile, on the right channel of the Side mic, that same signal causes an out-of-phase negative polarity. That negative polarity combines with the Mid mic in the right channel, resulting in a reduced level on the right side. An instrument positioned 45 degrees to the right creates exactly the opposite effect, increasing the signal to the right side while decreasing it to the left.

What’s The Advantage?

One of the biggest advantages of MS recording is the flexibility it provides. Since the stereo imaging is directly dependent on the amount of signal coming to the side channels, raising or lowering the ratio of Mid to Side channels will create a wider or narrower stereo field. The result is that you can change the sound of your stereo recording after it’s already been recorded, something that would be impossible using the traditional XY microphone recording arrangement.
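To make the sum-and-difference matrix concrete, here is a minimal sketch of an MS decode in Python with NumPy. It assumes the Mid and Side tracks are already available as sample arrays; the function name, the synthetic test signals, and the width parameter are illustrative assumptions, not part of any particular DAW or plug-in.

    import numpy as np

    def ms_decode(mid, side, width=1.0):
        # Scale the Side signal: 0.0 collapses to mono (Mid only),
        # 1.0 is the plain decode, values above 1.0 widen the image.
        s = side * width
        left = mid + s    # Mid + (+Side) = left channel
        right = mid - s   # Mid + (-Side) = right channel
        return left, right

    # Synthetic stand-ins for the two recorded tracks.
    t = np.linspace(0.0, 1.0, 48000, endpoint=False)
    mid = np.sin(2 * np.pi * 440 * t)          # Mid (cardioid) track
    side = 0.3 * np.sin(2 * np.pi * 220 * t)   # Side (figure-8) track

    left, right = ms_decode(mid, side, width=1.0)

    # Folding the decoded stereo back to mono cancels the Side component
    # and leaves only the Mid signal: (L + R) / 2 = Mid.
    mono = 0.5 * (left + right)
    assert np.allclose(mono, mid)

Raising the width value plays the same role as pushing the two Side channels up in the mix: the stereo field gets wider, while a width of zero leaves only the direct, mono Mid signal.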
Try some experimenting with this—listen to just the Mid channel, and you’ll hear a direct, monophonic signal. Now lower the level of the Mid channel while raising the two Side channels. As the Side signals increase and the Mid decreases, you’ll notice the stereo image gets wider, while the center moves further away. (Removing the Mid channel completely results in a signal that’s mostly ambient room sound, with very little directionality – useful for effect, but not much else.) By starting with the direct Mid sound and mixing in the Side channels, you can create just the right stereo imaging for the track.

Another great benefit of MS miking is that it provides true mono compatibility. Since the two Side channels cancel each other out when you switch the mix to mono, only the center Mid channel remains, giving you a perfect monaural signal. And since the Side channels also contain much of the room ambience, collapsing the mix to mono eliminates that sound, resulting in a more direct mix with increased clarity. Even though most XY recording is mono compatible, the potential for phase cancellation is greater than with MS recording. This is one reason the MS microphone technique has always been popular in the broadcast world.

Other Variations

While most MS recording is done with a cardioid mic for the Mid channel, varying the Mid mic can create some interesting effects. Try an omni mic pattern on the Mid channel for dramatically increased spaciousness and an extended low frequency response.

Experimenting with different combinations of mics can also make a difference. For the most part, both mics should be fairly similar in sound. This is particularly true when the sound source is large, like a piano or choir, because the channels are sharing panning information; otherwise the tone quality will vary across the stereo field. For smaller sources with a narrower stereo field, like an acoustic guitar, matching the mics becomes less critical. With smaller sources, it’s easier to experiment with different, mismatched mics. For example, try a brighter sounding side mic to color the stereo image and make it more spacious.

As you can see, there’s a lot more to the MS microphone technique than meets the ear, so give it a try. Even if the technical theory behind it is a bit confusing, in practice you’ll find it to be an incredibly useful method to attain ultimate control of the stereo field in your recordings.

Daniel Keller is a musician, engineer and producer. Since 2002 he has been president and CEO of Get It In Writing, a public relations and marketing firm focused on audio and multimedia professionals and their toys. Despite being immersed in professional audio his entire adult life, he still refuses to grow up.

This article is courtesy of Universal Audio.