duped
Read this book https://ccrma.stanford.edu/~jos/pasp/. It's a hefty tome but kind of the bible of the various approaches to audio signal processing and synthesis.

There's no one method to rule them all; they all have tradeoffs. So there is no "ray tracing" of audio (well, actually there is - it's literally ray tracing - but for a lot of reasons it's prohibitive to do in a way that sounds good).

> What is this field called?

Audio synthesis. Popular venues with papers covering the topic are DAFx (the Digital Audio Effects conference) and JAES (the Journal of the Audio Engineering Society). To a lesser extent, the IEEE Transactions on Audio, Speech, and Language Processing.

> I am assuming it has something to do with the physics field of acoustics?

You can find plenty of sources in acoustics journals/textbooks - but this is like comparing the needs of mechanical and electrical engineers to those of game engine designers. At a surface level there is crossover, and cross-pollination of techniques/tools, but the needs are fundamentally different.

an_aparallel
The type of synthesis you're referring to is "physical modelling" - but from a physics and mathematics perspective, my understanding is that this is referred to as DSP (Digital Signal Processing, which all the resources pointed to here are), computational physics, numerical analysis, fluid dynamics - and most likely a few other names I'm yet to stumble upon.

The closest thing to what you mention is: Modalys by IRCAM (Institute for Research and Coordination in Acoustics/Music)

https://forum.ircam.fr/projects/detail/modalys/

From the readme:

Modalys is IRCAM’s flagship physical model-based sound synthesis environment, used to create virtual instruments from elementary physical objects such as strings, plates, tubes, membranes, plectra, bows, or hammers.

It is also possible to create objects with more complex shapes out of 3D meshes, or using measurements, and Modalys does all the hard computational work for you, bringing them to life and making them sound.

You can run Modalys inside Ableton's flagship Suite.

I'm very passionate about this area of music/DSP... one day I hope to get involved in it :-)

thrtythreeforty
> Like howling of wind, the sound from musical instruments

The typical technique used to model these is a "digital waveguide" - treating the wind instrument or string as a one-dimensional tube. The wave propagating down the tube is sampled at your sample rate spatially, so the computer model consists of a pair of queues of samples propagating in each direction. A typical guitar string might have a pair of buffers 300-600 audio samples long, and you move pointers into those buffers every cycle to model the propagation. I'll link to PASP as others are doing for a diagram:

https://ccrma.stanford.edu/~jos/pasp/Ideal_Acoustic_Tube.htm...

At the end of the waveguide (typically where the string is attached to the bridge, or the sound hole of a tube is), the traveling wave encounters an impedance discontinuity, exactly like an electrical wave on a transmission line would, except the units are different. This discontinuity causes the energy in the wave to reflect, and as this repeats, the string vibrates.

That's the core technique. There is so much research on top of this, backed by hard math, to model other nonlinear behaviors of strings, to add other components to the model, etc. I find it fascinating how physical modeling is very well studied within a very small circle of researchers, and nearly nobody else has heard of the concepts.
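For anyone who wants to hear this in action, the classic stripped-down form of a waveguide string is the Karplus-Strong algorithm: a single delay line seeded with noise, with a crude low-pass "bridge" filter in the feedback loop standing in for the reflection loss. A minimal Python sketch (the function name, parameter names, and default values here are my own choices, not from PASP):

```python
import random

def karplus_strong(freq_hz, sample_rate=44100, duration_s=0.5, damping=0.996):
    """Pluck a string: a delay line seeded with noise, with the averaged
    (low-pass filtered) output fed back in to model reflection losses."""
    n = int(sample_rate / freq_hz)                       # delay ~ one period
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]  # the "pluck"
    out = []
    for _ in range(int(sample_rate * duration_s)):
        new = damping * 0.5 * (buf[0] + buf[1])  # two-point average = crude low-pass
        out.append(buf.pop(0))                   # read the traveling wave
        buf.append(new)                          # feed the filtered sample back
    return out

samples = karplus_strong(440.0)   # roughly an A4 pluck
```

Writing `samples` to a WAV file (e.g. with the stdlib `wave` module) gives a surprisingly convincing pluck for a dozen lines of code; the full two-buffer waveguide formulation in PASP generalizes this to bowed strings and wind instruments.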

Happy to discuss further, either here or over email, if you or anyone else has questions!

jfkw
Engine Simulator is an interesting example of this type of work: https://www.youtube.com/@AngeTheGreat
solardev
There's quite a bit of this in the video & PC game world. Early on, technologies like Aureal A3D (https://en.wikipedia.org/wiki/Aureal_Semiconductor#A3D) or Creative EAX (https://en.wikipedia.org/wiki/Environmental_Audio_Extensions) tried to do simple simulations of reverb and spatial audio. These gave way to OpenAL (https://en.wikipedia.org/wiki/OpenAL); I guess more recently it's Wwise (https://www.audiokinetic.com/en/wwise/wwise-spatial-audio/).

The Wwise page is especially interesting, with a few paragraphs on audio positioning and reflections simulated inside level geometry.

coolhand2120
A few companies have commercialized the concept of physical audio modeling. Audio modeling allows for complex control surfaces (e.g.: https://roli.com/products/seaboard/rise2) that use MPE (MIDI Polyphonic Expression).

Audio Modeling (https://audiomodeling.com/) creates the SWAM technology and provides a whole bunch of physically modeled instruments, like horns, winds, and strings.

For more fundamental sounds, check out Ableton Live Suite:

https://www.ableton.com/en/live/compare-editions/#software-i...

Ableton provides a number of physically modeled instruments: Tension = string, Collision = mallet/percussion, and a few more.

Both are quite expensive and require a somewhat powerful machine.

While these sound cool, a sampler - that is, a synth that plays back recordings of the real instrument - sounds much more "real" than a modeled version. That's thanks to the Fourier transform. https://www.andreinc.net/2024/04/24/from-the-circle-to-epicy...

A good place to start on sound in general is at Bartosz Ciechanowski's blog on sound: https://ciechanow.ski/sound/

nodenoise
This is essentially sound design from first principles. There's a good book here: https://www.amazon.com/Designing-Sound-Press-Andy-Farnell/dp... Note that the software used (Pure Data) can be replaced by another high-level language (SuperCollider: https://supercollider.github.io/) pretty easily. I know of no "tool" to do what you want because there are few things that are universal to different kinds of natural and unnatural sound. (Note: study acoustics and psycho-acoustics to better understand why the former is true.)
stefanha
> the sound from musical instruments

Pianoteq (https://en.wikipedia.org/wiki/Pianoteq) is a physically modeled collection of instruments (mostly pianos). Runs even on a Raspberry Pi and sounds like the real deal without gigabytes of prerecorded samples. Super impressive what physical modeling can achieve.

iainctduncan
It's called Physical Modelling, and it's a big sub-branch of audio synthesis. A very good intro is Perry Cook's book "Real Sound Synthesis for Interactive Applications".

To understand the physics behind the software, the field you need to read is called acoustics. A great intro (and the textbook we used in grad school) is "The Science of Sound" by Rossing et al.

(And +1 to Julius Smith book recommendations).

hn30000
Some good posts here. Another term to look into is auralization (often discussed in reference to room acoustics). I have done some research into architectural acoustics and real-time audio rendering tools and am happy to discuss further (though it is a big field and I definitely don't know it all) - anyone reading this should feel free to shoot me a message.
Reubend
This is called "physical modelling". Creating sound from a physical simulation is popular for things like plucked strings, but it's very difficult for more complicated instruments like harmonicas.
Slow_Hand
The field you're looking for in audio is known as physical modeling; it's a facet of the broader field of audio synthesis (along with subtractive synthesis, FM synthesis, granular synthesis, etc.).

Physical modeling is primarily used to simulate musical instruments or create non-existent instruments like a 40 ft wide frame drum or a saxophone made of rubber.

I don't see it used that often to simulate non-musical noises, but I wouldn't be surprised if there were other people out there doing it.

joe_away629
Search "physical modeling" on dafx.de: https://www.dafx.de/paper-archive/search.php?q=physical+mode...

You might also like to search "analog modeling" or "virtual analog" for models of analog electronic circuits (distortion pedals, filters, etc.)

A book about modeling, e.g., car engines, in Pure Data (e.g., for games): https://mitpress.mit.edu/9780262014410/designing-sound/

You might check out the source code to Émilie Gillet's (of Mutable Instruments) Rings and Elements modules: https://github.com/pichenettes/eurorack

"Designing the Make Noise Erbe-Verb" Reverb Design Lecture": https://www.youtube.com/watch?v=Il_qdtQKnqk&t=18s

Sean Costello's (of Valhalla DSP) blog about mostly reverbs: https://valhalladsp.wordpress.com/

binkethy
Well, howling of wind can be achieved simply by low-pass filtering white noise and varying the cutoff frequency.
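As a sketch of that idea in Python (the cutoff range and the 0.5 Hz "gust" rate are arbitrary choices of mine, not a canonical recipe):

```python
import math
import random

def wind(duration_s=0.5, sample_rate=44100):
    """White noise through a one-pole low-pass whose cutoff slowly wanders."""
    out, y = [], 0.0
    for i in range(int(sample_rate * duration_s)):
        t = i / sample_rate
        # cutoff sweeps between 200 Hz and 800 Hz at 0.5 Hz: the "gusts"
        cutoff = 500.0 + 300.0 * math.sin(2 * math.pi * 0.5 * t)
        a = 1.0 - math.exp(-2 * math.pi * cutoff / sample_rate)  # filter coeff
        x = random.uniform(-1.0, 1.0)   # white noise source
        y += a * (x - y)                # one-pole low-pass: y[n] = y[n-1] + a*(x - y[n-1])
        out.append(y)
    return out

samples = wind()
```

Replacing the sine sweep with a slower, noisier modulation signal makes the gusts sound less periodic and more natural.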

What you are referring to is known as physical modeling, and there are many techniques employed, depending upon the sort of system one seeks to model.

musha68k
Expressive E, among others, sells some impressive physical modelling tech:

https://youtu.be/oTxgHJMpKsw?si=ibIX9pxxl3Sh--xy

If you want to delve deeper on the open source side, there's Csound, SuperCollider and more:

Csound - wg (waveguide) opcodes (e.g. wgpluck, wgbow)

https://csound.com/docs/manual/SiggenWavguide.html

SuperCollider - Stk (Synthesis ToolKit) plugins (e.g. StkBowed, StkPlucked)

https://github.com/supercollider/sc3-plugins

Faust - Functional audio processing

https://faustlibraries.grame.fr/libs/physmodels/

https://youtu.be/u8WTnQPzL2w?si=DY5J-ktLIYLdM_fB

Pure Data (Pd) - pmpd (Physical Modeling for Pure Data) library

https://puredata.info/downloads/pmpd

tlarkworthy
The pluckable string demo on HN the other day is pretty cool: https://news.ycombinator.com/item?id=40442595

Just simulating a vibrating string gives a huge variety of sounds depending on how its harmonics pile up and where you sample it (like where the pickup is on an electric guitar).
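The effect of pluck and pickup position falls out of even a naive additive model: on an ideal string, harmonic k has amplitude proportional to sin(k·π·pluck_pos)/k², and a pickup at pickup_pos senses it with weight sin(k·π·pickup_pos). A Python sketch (the formulas are the textbook ideal-string ones; the function name and defaults are mine):

```python
import math

def string_tone(freq_hz, pluck_pos=0.2, pickup_pos=0.85,
                n_harmonics=20, sample_rate=44100, duration_s=0.25):
    """Additive model of an ideal plucked string 'sampled' at pickup_pos.
    Positions are fractions of the string length in (0, 1)."""
    n = int(sample_rate * duration_s)
    out = [0.0] * n
    for k in range(1, n_harmonics + 1):
        amp = math.sin(k * math.pi * pluck_pos) / (k * k)   # pluck shape
        sense = math.sin(k * math.pi * pickup_pos)          # pickup weight
        w = 2 * math.pi * k * freq_hz
        for i in range(n):
            out[i] += amp * sense * math.sin(w * i / sample_rate)
    return out

bridge = string_tone(220.0, pickup_pos=0.95)  # pickup near the bridge: thinner
neck = string_tone(220.0, pickup_pos=0.60)    # pickup near the neck: rounder
```

A pickup sitting at a node of some harmonic simply doesn't hear it, which is why the same string sounds so different through different pickups.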

filoleg
A lot of great suggestions in the thread, but I am surprised no one has mentioned Syntorial[0] yet, especially given how widely recommended it is (I've seen it both on HN and elsewhere many times over the years). For context: it is over a decade old, but it is constantly updated and improved, and none of it is outdated (not that fundamental sound synthesis can get outdated, but I digress).

It is basically a sound synthesis learning tool (and is also a synth in itself). The final goal of the whole thing is to learn to go from “sound in my head” to “sound in my synth” on your own. You start with a basic sine wave (or a square pulse, I forget), and you go from there. It goes quite a bit into the fundamental physics of it, like harmonics/frequency interaction, oscillators of different types and how they can be used, different wave interactions, etc.

The tool is also good at giving practical intuition tips without handholding you the entire time, so you get a good amount of room to experiment and try things to get it right. That is imo the most valuable part of the whole tool: it doesn't just tell you exactly what to do every time, but gives just enough info for you to get there yourself.

It is very “from first principles,” in the sense that you start with a basic oscillating wave and everything naturally builds on top of it. I really liked how it simulated the process of discovering all that on your own (but in a much more efficient and educative way). It blew my mind at the time, not least because it opened my eyes to the fact that all a sound ultimately is, is a collection of very basic oscillators and waves that get DSP'd and “basic physics'd” into something with a unique harmonic profile. It is difficult to unsee that and not think of sounds that way afterwards, which confirms to me that it fundamentally shifted my understanding of the topic.

Disclaimer: I have yet to finish the full thing (planning to, eventually), but the “from first principles” approach they use is great and sounds very much up your alley.

0. https://www.syntorial.com/

moffkalast
Man do I have a brilliant youtube channel to point you at: https://www.youtube.com/@AngeTheGreat/videos
theoriginaldave
I know companies like Siemens sell very advanced acoustic modeling and simulation tools. The software is commercial, but if it uses standards or open libraries you might be able to make use of them, at least as a starting point for research. https://plm.sw.siemens.com/en-US/simcenter/simulation-test/a...
janosdebugs
Unreal Engine and Steam Audio may be worth looking into before you invest significant amounts of time into this.
a1o
If you need this for games, I would advise inspecting the code of OpenAL Soft, which has lots of interesting concepts implemented, and also reading the documentation on Steam Audio - that one is open source too, so you can take a look at Valve's implementation here: https://github.com/ValveSoftware/steam-audio

I would at least take a look at how HRTFs (head-related transfer functions) work, as you need to account for how humans experience sounds when simulating audio.

chabes
Some VCV Rack modules do physical modeling. You can also develop your own modules for VCV Rack, as it’s mostly open source.

The two that immediately come to mind are:

- the Tube Unit module from Sapphire: https://github.com/cosinekitty/sapphire/blob/main/TubeUnit.m...

…from the docs: “Tube Unit is loosely based on a physical acoustics model of a resonant tube, but with some fanciful departures from real-world physics to make it more fun.”

And

- Elastika (also Sapphire): https://github.com/cosinekitty/sapphire/blob/main/Elastika.m...

…from the docs: “The physics model includes a network of balls and springs connected in a hexagonal grid pattern.”

Another that comes to mind is Elements, from Mutable Instruments (called the Modal Synth, from Audible Instruments, in VCV Rack). It’s not necessarily a physical model, but more a physical imitation. You get control over bowing, blowing, and striking impulses, as well as the main structure of resonance… from the manual:

* Internally uses 64 zero-delay feedback state variable filters.

* Coarse, fine and FM frequency controls.

* Geometry: Interpolates through a collection of structures, including plates, strings, tubes, bowls.

* Brightness: Specifies the character of the material the resonating structure is made of – from wood to glass, from nylon to steel.

* Damping: Adds damping to the sound – simulates a wet material or the muting of the vibrations.

* Position: Specifies at which point the structure is excited.

* Space: Creates an increasingly rich stereo output by capturing the sound at two different points of the structure, and then adds more space through algorithmic reverberation.

https://pichenettes.github.io/mutable-instruments-documentat...

mitthrowaway2
It's a bit old now, but you might be interested in Phya: Physical audio for virtual worlds

https://phyacode.wordpress.com/2016/01/30/phya/

https://repository.gatech.edu/bitstream/1853/50024/1/Menzies...

hammock
"Audio engines" exist.. they are called VSTs or plugins. The ones with the most physical modeling are echo and reverb plugins.. modeling a physical environment and then reproducing it digitally. Or noise reduction plugins - reverse engineering various "noise" sources and then subtracting them. Very math intensive.

There are also analog versions of this - guitar pedals that create echo and reverb by "modeling" a room's sound via physical internal parts.

FractalHQ
Download a free trial version of Chromaphone or Logic Pro and play with physical modeling synths yourself for endless fun!
jayd16
NVIDIA has had an implementation of path-traced audio in their VR SDKs for quite a while. I don't think it's that popular, though.

https://www.youtube.com/watch?v=Ozhywx2YbzM

juancn
There's a course from SIGGRAPH 2016 about physically based sound that's fairly good: https://www.youtube.com/watch?v=4OGeAfyDa4Y

It can serve as a starting point.

an_aparallel
I also recommend Bart Hopkin's book "Musical Instrument Design", a practical crash course in DIY musical instrument making, mixed with an informal introduction to acoustics/physics.
phibz
Look up the YouTube series from AngeTheGreat. He is building a realistic, physics-based, parametric audio simulation of car exhaust.

Perhaps more entertainment than education, but it's a great place to start.

kretaceous
I can't be of much help here but

> I used to think that alot of physics was bs until I found a tutorial on Box 2D physics.

> The tutorials used real physics concepts, like momentum, coefficient of friction and others to simultate physical (mechanical properties of items on a computer).

Can you share these said tutorials if possible? Thanks!

iKlsR
Can you share the tutorial you found?
fregonics
Could you indicate the tutorial on Box 2D physics you mentioned? I'm interested
firebot
There's MIDI, of course. Then you have things like FL Studio (formerly Fruity Loops) and similar.

Then of course there are all kinds of synths for FL Studio.

Audio is ultimately analog and requires a DAC (digital-to-analog conversion). I would think you might have some fun with some hardware synths.

echoangle
Can you explain what you mean by “ I used to think that alot of physics was bs until I found a tutorial on Box 2D physics.”? Do you mean you thought physics as in the description of the real world was bs? Or physics as in computer physics engines? Did you not believe the physical laws were properly describing the real world?
rapjr9
There is the synthesis of sounds, and there are a variety of models for that, such as FM synthesis, additive harmonic synthesis, physical modeling, wavetable synthesis, granular synthesis and more. Physical modeling is somewhat like ray tracing: modeling the materials and sound paths mathematically to imitate a real instrument (or any object) making sound.
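To give a flavour of how lightweight some of those families are compared with physical modeling, two-operator FM needs only a few lines: one sine wave offsets the phase of another, creating sidebands around the carrier. A minimal Python sketch (the parameter values are arbitrary picks of mine):

```python
import math

def fm_tone(carrier_hz=440.0, ratio=2.0, index=3.0,
            sample_rate=44100, duration_s=0.25):
    """Two-operator FM: the modulator's output offsets the carrier's phase.
    ratio sets the modulator frequency; index sets modulation depth."""
    out = []
    for i in range(int(sample_rate * duration_s)):
        t = i / sample_rate
        mod = index * math.sin(2 * math.pi * carrier_hz * ratio * t)
        out.append(math.sin(2 * math.pi * carrier_hz * t + mod))
    return out

samples = fm_tone()   # a bright, clangy 440 Hz tone
```

Integer ratios give harmonic spectra; non-integer ratios give the inharmonic, bell-like tones FM is famous for.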

There is modeling of acoustic environments, which is very much like the ray tracing of audio, dealing with reflections, damping, phase, refraction, dispersion and reverberation.

There is the physics of audio perception, how the ear works, how the brain perceives sound, psychoacoustics. What makes your ear "think" you are indoors or outdoors?

There are encoding standards, AAC, WAV, MP3, FLAC, which can also get into psychoacoustics.

There are algorithms, like DSP signal processing algorithms for audio which do reverb, EQ, echo, psychoacoustics, sound projection/direction, flanging, surround sound, and a lot more for shaping sounds.

There is the study of how to build optimal acoustic spaces, like concert halls and music studio performance rooms. More psychoacoustics, but some math and sensors also.

There is the art of trying to use all the above to accomplish something, a song, a soundtrack, a concert, a jingle.

So there are a lot of different aspects to audio, and I've probably left some out of the above list (voice, foley, sound effects, subsonic, supersonic, ...). I'd suggest you first work on defining your problem better: what do you actually want to solve, and for what audience? There are many tools that do a lot of this already.

If you start out with the most basic audio physics (how sound travels through a variety of materials, and what happens at transitions between materials), you could probably spend many years working on just that as a comprehensive system before you even get to generating sounds. I don't think there is a complete system that can predict all aspects of sound propagation through materials (including air), because materials can be extremely complex: air has currents, variations in humidity, and pressure that varies in time and space, and the sounds in air are also strongly shaped by reflections and the qualities of the surfaces they reflect from.

Imagine the complexity of modeling all the trees, leaves, plants and rocks in a forest. You can "record" the effects of one specific spot in a forest using impulse response recordings and apply them to sounds, but modeling it from scratch so you could recreate any forest location would probably be a lifetime's work, perhaps several lifetimes, unless you can find a clever way to do it (maybe generate artificial fractal landscapes based on parameters measured from real places, and then do acoustic ray tracing?).
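The impulse-response trick mentioned above is just convolution: record how the space responds to a single click, then convolve any dry signal with that recording. A toy Python sketch (the three-tap "room" is made up purely for illustration):

```python
def convolve(dry, impulse_response):
    """Direct-form convolution: each dry sample triggers a scaled,
    delayed copy of the room's recorded response."""
    wet = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(impulse_response):
            wet[i + j] += d * h
    return wet

# toy "room": direct sound plus two progressively quieter echoes
ir = [1.0, 0.0, 0.0, 0.5, 0.0, 0.25]
click = [1.0]                 # a single impulse as the dry signal
wet = convolve(click, ir)     # reproduces the IR itself
```

Real convolution reverbs use FFT-based (fast) convolution, since real impulse responses are seconds long at 44.1 kHz and the direct form above would be far too slow.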

Acoustic "ray tracing" is not like light ray tracing because sound is much lower frequency and diffracts, it bends around objects instead of bouncing off, and also bounces off. The phase of the interacting "rays" or waveforms as they reach the ear also matters a lot more with audio than with light which makes the computations more complex. In a light ray tracer light travels in straight lines from sources to virtual camera. In a sound ray tracer sound can bounce and refract and disperse from anywhere to reach the virtual ear. What is on the other side of a wall can matter to what a room sounds like.

Don't let my talk of complexity deter you, though - look around and maybe you'll find something nobody has thought of before. It might be a matter of plugging together some tools that already exist (e.g., a 3D definition of a room or outdoor space, sound source models, a virtual ear location) and letting the computer crunch for a few days.

d--b
> I used to think that alot of physics was bs until I found a tutorial on Box 2D physics.

Haha. Thanks Newton for Angry Birds.