The closest thing to what you mention is: Modalys by IRCAM (Institute for Research and Coordination in Acoustics/Music)
https://forum.ircam.fr/projects/detail/modalys/
From the readme:
Modalys is IRCAM’s flagship physical model-based sound synthesis environment, used to create virtual instruments from elementary physical objects such as strings, plates, tubes, membranes, plectra, bows, or hammers.
It is also possible to create objects with more complex shapes out of 3D meshes, or using measurements, and Modalys does all the hard computational work for you, bringing them to life and making them sound.
You can run Modalys inside Ableton's flagship Suite.
I'm very passionate about this area of music/DSP... one day I hope to get involved in it :-)
The typical technique used to model these is a "digital waveguide": treating the wind instrument or string as a one-dimensional tube. The wave propagating down the tube is sampled spatially at your sample rate, so the computer model consists of a pair of queues of samples, one propagating in each direction. A typical guitar string might have a pair of buffers 300-600 audio samples long, and you move pointers into those buffers every cycle to model the propagation. I'll link to PASP, as others are doing, for a diagram:
https://ccrma.stanford.edu/~jos/pasp/Ideal_Acoustic_Tube.htm...
At the end of the waveguide (typically where the string is attached to the bridge, or the sound hole of a tube is), the traveling wave encounters an impedance discontinuity, exactly like an electrical wave on a transmission line would, except the units are different. This discontinuity causes the energy in the wave to reflect, and as this repeats, the string vibrates.
That's the core technique. There is so much research on top of this, backed by hard math, to model other nonlinear behaviors of strings, to add other components to the model, etc. I find it fascinating how physical modeling is very well studied within a very small circle of researchers, and nearly nobody else has heard of the concepts.
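To make that concrete, here's a minimal Karplus-Strong-style pluck in Python. It's a simplified cousin of the bidirectional waveguide above: the pair of traveling-wave buffers collapses into a single delay line, and the averaged, attenuated write-back stands in for the lossy reflection at the terminations. (A sketch for illustration, not production DSP; the function name and parameter values are made up.)

```python
import random

def pluck(freq_hz, sample_rate=44100, duration_s=1.0, damping=0.996):
    """Karplus-Strong string: a delay line whose length sets the pitch,
    refilled each cycle through a lowpass (two-sample average) and an
    attenuation factor that together model the lossy bridge reflection."""
    n = int(sample_rate / freq_hz)                       # delay length ~ string length
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]  # the "pluck": a burst of noise
    out = []
    for i in range(int(sample_rate * duration_s)):
        s = buf[i % n]
        out.append(s)
        # reflect: average with the neighbor (lowpass) and attenuate
        buf[i % n] = damping * 0.5 * (s + buf[(i + 1) % n])
    return out

samples = pluck(440.0, duration_s=0.5)
```

The delay length `sample_rate / freq_hz` is the round-trip time of the traveling wave, and the lowpass in the feedback path is why the tone loses its bright attack and decays the way a real plucked string does.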
Happy to discuss further, either here or over email, if you or anyone else has questions!
The Wwise page is especially interesting, with a few paragraphs on audio positioning and reflections simulated inside level geometry.
Audio Modeling (https://audiomodeling.com/) creates the SWAM technology and provides a whole range of physically modeled instruments: horns, winds, and strings.
For more fundamental sounds check out Ableton Suite
https://www.ableton.com/en/live/compare-editions/#software-i...
Ableton provides a number of physically modeled instruments. Tension = string, collision = mallet/percussion, and a few more.
Both are quite expensive and require a somewhat powerful machine.
While these sound cool, a sampler (that is, a synth that plays back recordings of the real instrument) sounds much more "real" than a modeled version. That's thanks to the Fourier transform. https://www.andreinc.net/2024/04/24/from-the-circle-to-epicy...
A good place to start on sound in general is at Bartosz Ciechanowski's blog on sound: https://ciechanow.ski/sound/
Pianoteq (https://en.wikipedia.org/wiki/Pianoteq) is a physically modeled collection of instruments (mostly pianos). Runs even on a Raspberry Pi and sounds like the real deal without gigabytes of prerecorded samples. Super impressive what physical modeling can achieve.
To understand the physics behind the software, the field you need to read up on is called acoustics. A great intro (and the textbook we used in grad school) is "The Science of Sound" by Rossing et al.
(And +1 to Julius Smith book recommendations).
Physical modeling is primarily used to simulate musical instruments or create non-existent instruments like a 40 ft wide frame drum or a saxophone made of rubber.
I don't see it used that often to simulate non-musical noises, but I wouldn't be surprised if there were other people out there doing it.
You might also like to search "analog modeling" or "virtual analog" for models of analog electronic circuits (distortion pedals, filters, etc.)
Book about modeling, eg., car engines in Pure Data (eg., for games): https://mitpress.mit.edu/9780262014410/designing-sound/
You might check out the source code to Émilie Gillet's (of Mutable Instruments) Rings and Elements modules: https://github.com/pichenettes/eurorack
"Designing the Make Noise Erbe-Verb" reverb design lecture: https://www.youtube.com/watch?v=Il_qdtQKnqk&t=18s
Sean Costello's (of Valhalla DSP) blog about mostly reverbs: https://valhalladsp.wordpress.com/
What you are referring to is known as physical modeling, and there are many techniques employed, depending upon the sort of system one seeks to model.
https://youtu.be/oTxgHJMpKsw?si=ibIX9pxxl3Sh--xy
If you want to delve deeper on the open-source side with Csound, SuperCollider, and more:
Csound - wg (waveguide) opcodes (e.g. wgpluck, wgbow)
https://csound.com/docs/manual/SiggenWavguide.html
SuperCollider - Stk (Synthesis ToolKit) plugins (e.g. StkBowed, StkPlucked)
https://github.com/supercollider/sc3-plugins
Faust - Functional audio processing
https://faustlibraries.grame.fr/libs/physmodels/
https://youtu.be/u8WTnQPzL2w?si=DY5J-ktLIYLdM_fB
Pure Data (Pd) - pmpd (Physical Modeling for Pure Data) library
Just simulating a string vibrating has a huge variety of sounds depending on how its harmonics pile up and where you sample from it (like where the pickup is on an electric guitar).
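For an ideal plucked string that's easy to play with numerically: mode n starts with amplitude proportional to sin(nπp)/n² for pluck position p, and a pickup at position x hears each mode weighted by sin(nπx), so any harmonic with a node under the pickup drops out. A toy sketch (the function name is invented):

```python
import math

def harmonic_amplitudes(pluck_pos, pickup_pos, n_harmonics=8):
    """Relative amplitude of each harmonic of an ideal plucked string:
    the pluck position shapes the initial spectrum (~1/n^2 rolloff),
    and the pickup samples each mode at sin(n*pi*x), so harmonics with
    a node at the pickup vanish."""
    amps = []
    for n in range(1, n_harmonics + 1):
        a = (math.sin(n * math.pi * pluck_pos) / n**2) * math.sin(n * math.pi * pickup_pos)
        amps.append(abs(a))
    return amps

# Pickup at the string's midpoint: every even harmonic has a node there
amps = harmonic_amplitudes(pluck_pos=0.2, pickup_pos=0.5)
```

With the pickup at the midpoint, every even harmonic sits on a node and vanishes, which is part of why neck vs. bridge pickup positions sound so different.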
It is basically a sound synthesis learning tool (and is also a synth in itself). The final goal of the whole thing is essentially to learn to go from "sound in my head" to "sound in my synth" on your own. You start with a basic sine wave (or a square pulse, I forget which), and you go from there. It goes quite a bit into the fundamental physics of it, like harmonics and frequency interaction, oscillators of different types and how they could be used, different wave interactions, etc.
The tool is also good at giving practical intuition tips without handholding you the entire time, so you get a good amount of room to experiment and try things to get it right. That is, imo, the most invaluable part of the whole tool: it doesn't just tell you exactly what to do every time, but gives exactly the amount of info you need to get there yourself.
It is very "from first principles," in the sense that you start with a basic oscillating wave and everything naturally builds up on top of it. I really liked how it basically simulated the process of discovering all that on your own (but in a much more efficient and educational way). It blew my mind at the time, not least because it straight up opened my eyes to the fact that all a sound ultimately is is a collection of very basic oscillators and waves that get DSP'd and "basic physics'd" into a sound with a unique harmonic profile. It is difficult to unsee that revelation and not think of sounds that way afterwards, which affirms to me that it was a fundamental shift in my understanding of the topic.
Disclaimer: I have yet to finish the full thing (planning to, eventually), but the "from first principles" approach they use is great and sounds very much up your alley.
I would at least take a look at how HRTFs (head-related transfer functions) work, as you need to account for how humans experience sound when simulating audio.
The two that immediately come to mind are:
- the Tube Unit module from Sapphire: https://github.com/cosinekitty/sapphire/blob/main/TubeUnit.m...
…from the docs: “Tube Unit is loosely based on a physical acoustics model of a resonant tube, but with some fanciful departures from real-world physics to make it more fun.”
And
- Elastika (also Sapphire): https://github.com/cosinekitty/sapphire/blob/main/Elastika.m...
…from the docs: “The physics model includes a network of balls and springs connected in a hexagonal grid pattern.”
Another that comes to mind is Elements, from Mutable Instruments (called the Modal Synth, from Audible Instruments, in VCV Rack). It’s not necessarily a physical model, but more a physical imitation. You get control over bowing, blowing, and striking impulses, as well as the main structure of resonance… from the manual:
* Internally uses 64 zero-delay feedback state variable filters.
* Coarse, fine and FM frequency controls.
* Geometry: Interpolates through a collection of structures, including plates, strings, tubes, bowls.
* Brightness: Specifies the character of the material the resonating structure is made of – from wood to glass, from nylon to steel.
* Damping: Adds damping to the sound – simulates a wet material or the muting of the vibrations.
* Position: Specifies at which point the structure is excited.
* Space: Creates an increasingly rich stereo output by capturing the sound at two different points of the structure, and then adds more space through algorithmic reverberation.
https://pichenettes.github.io/mutable-instruments-documentat...
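Under the hood, modal synthesis like this amounts to a bank of resonators, and the impulse response of each resonant filter is a decaying sinusoid. Here's a toy Python sketch of a strike that sums damped sines directly rather than running actual zero-delay-feedback filters (the mode frequencies, amplitudes, and decays are invented):

```python
import math

def modal_strike(modes, sample_rate=44100, duration_s=0.25):
    """Modal synthesis sketch: a struck resonator as a bank of
    exponentially decaying sinusoids, one per (freq_hz, amp, decay_s)
    mode. Summing damped sines is the impulse response you'd get from
    exciting a bank of resonant filters with a click."""
    n = int(sample_rate * duration_s)
    out = [0.0] * n
    for freq_hz, amp, decay_s in modes:
        for i in range(n):
            t = i / sample_rate
            out[i] += amp * math.exp(-t / decay_s) * math.sin(2 * math.pi * freq_hz * t)
    return out

# A toy "plate": inharmonic partials, with shorter decays up high
samples = modal_strike([(220.0, 1.0, 0.2), (493.0, 0.5, 0.12), (887.0, 0.25, 0.06)])
```

The Geometry/Brightness/Damping/Position controls described above map naturally onto this picture: they reshape the mode frequencies, their relative amplitudes, and their decay times.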
https://phyacode.wordpress.com/2016/01/30/phya/
https://repository.gatech.edu/bitstream/1853/50024/1/Menzies...
There are also analog versions of this: guitar pedals that create echo and reverb by "modeling" a room's sound via physical internal parts.
It can serve as a starting point.
Perhaps more entertainment than education, but he is a great place to start.
> I used to think that a lot of physics was BS until I found a tutorial on Box2D physics.
> The tutorials used real physics concepts, like momentum and the coefficient of friction, to simulate the physical (mechanical) properties of items on a computer.
Could you share those tutorials, if possible? Thanks!
Then of course there's all kinds of synths for FL studio.
Audio is ultimately analog and requires a DAC (digital-to-analog converter). I would think you might have some fun with some hardware synths.
There is modeling of acoustic environments, which is very much like the ray tracing of audio, dealing with reflections, damping, phase, refraction, dispersion and reverberation.
There is the physics of audio perception, how the ear works, how the brain perceives sound, psychoacoustics. What makes your ear "think" you are indoors or outdoors?
There are encoding standards, AAC, WAV, MP3, FLAC, which can also get into psychoacoustics.
There are algorithms, like DSP signal processing algorithms for audio which do reverb, EQ, echo, psychoacoustics, sound projection/direction, flanging, surround sound, and a lot more for shaping sounds.
There is the study of how to build optimal acoustic spaces, like concert halls and music studio performance rooms. More psychoacoustics, but some math and sensors also.
There is the art of trying to use all the above to accomplish something, a song, a soundtrack, a concert, a jingle.
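To ground one of those areas: the simplest effect in that DSP list, echo, is just a feedback delay line. A toy sketch in Python (names invented for illustration):

```python
def echo(signal, delay_samples, feedback=0.5):
    """Feedback delay line: each output sample is the input plus an
    attenuated copy of the output from delay_samples ago, so a single
    impulse comes back as a train of echoes decaying by `feedback`."""
    out = []
    for i, x in enumerate(signal):
        y = x
        if i >= delay_samples:
            y += feedback * out[i - delay_samples]
        out.append(y)
    return out

# A single click echoes at samples 3, 6, 9 with halving amplitude
clicks = echo([1.0] + [0.0] * 9, delay_samples=3)
```

Flangers, comb filters, and classic algorithmic reverbs are all elaborations of this same structure: shorten the delay, modulate it, or run several delay lines together.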
So there are a lot of different aspects to audio, and I've probably left some out of the above list (voice, foley, sound effects, subsonic, supersonic, ...). I'd suggest you first work on defining your problem better: what do you actually want to solve, and for what audience? There are many tools that do a lot of this already.

If you start out with the most basic audio physics (how sound travels through a variety of materials, and what happens at transitions between materials) you could probably spend many years working on just that, as a comprehensive system, before you even get to trying to generate sounds. I don't think there is a complete system that can predict all aspects of sound propagation through materials (including air), because materials can be extremely complex: air has currents, variations in humidity, and pressure that varies in time and space, and sounds in air are also strongly shaped by reflections and the qualities of the surfaces they reflect from.

Imagine the complexity of modeling all the trees, leaves, plants, and rocks in a forest. You can "record" the effects of one specific spot in a forest using impulse response recordings and apply it to sounds, but to model it from scratch, and be able to recreate any forest location, would probably be a lifetime's work, perhaps several lifetimes, unless you can find a clever way to do it (maybe generate artificial fractal landscapes based on parameters measured from real places, and then do acoustic ray tracing?)
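The impulse-response trick mentioned above is simple to demonstrate: record the space's response to a click, then convolve any dry sound with that recording, and the space's reflections and decay are imposed on the sound. A naive direct-form sketch in Python (real convolution reverbs use FFT-based fast convolution, and the 4-tap "room" here is made up):

```python
def convolve(dry, ir):
    """Apply a recorded space to a sound: direct-form convolution of
    the dry signal with the room's impulse response. Every input sample
    triggers a scaled copy of the whole IR in the output."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += d * h
    return out

# Toy example: a single click through a tiny "room"
# (direct sound, then two progressively quieter reflections)
dry = [1.0, 0.0, 0.0]
ir = [1.0, 0.0, 0.5, 0.25]
wet = convolve(dry, ir)
```

Convolving a single impulse reproduces the IR itself, which is exactly why recording a space's response to a click (or a swept sine) captures everything a linear model of that space can do.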
Acoustic "ray tracing" is not like light ray tracing, because sound is much lower frequency and diffracts: it bends around objects as well as bouncing off them. The phase of the interacting "rays" or waveforms as they reach the ear also matters a lot more with audio than with light, which makes the computations more complex. In a light ray tracer, light travels in straight lines from sources to a virtual camera. In a sound "ray tracer," sound can bounce, refract, and disperse from anywhere on its way to the virtual ear. What is on the other side of a wall can matter to what a room sounds like.
Don't let my talking of complexity deter you though, look and maybe you'll find something nobody has thought of before. It might be a matter of plugging together some tools that already exist (e.g., a 3D definition of a room or outdoor space, sound source models, choosing a virtual ear location, and let the computer crunch for a few days.)
Haha. Thanks Newton for Angry Birds.
There's no one method to rule them all; they all have tradeoffs. So there is no single "ray tracing" of audio (well, you can literally ray trace sound, but for a lot of reasons it's prohibitively expensive to do in a way that sounds good).
> What is this field called?
Audio synthesis. Popular journals with papers covering the topic are DAFx (Digital Audio Effects) and JAES (the Journal of the Audio Engineering Society). To a lesser extent, the IEEE transactions on audio signal processing.
> I am assuming it has something to do with the physics field of acoustics?
You can find plenty of sources in acoustics journals/text books - but this is like comparing the needs of mechanical and electrical engineers to game engine designers. At a surface level there is crossover, and cross pollination of techniques/tools, but the needs are fundamentally different.