A neutral middle-man that gives exact timing/response data.
A similar tool for this would be VCR (originally built in Ruby, but ported to other languages since): https://vcrpy.readthedocs.io/en/latest/. It injects itself into the request pipeline and records the result in a local file, which can then be replayed later in tests. It's quite a nice approach when you want to write tests for (or just explore) a highly complicated HTTP API without actually hitting it all the time.
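The record/replay idea can be sketched in a few lines of plain Python. This is not vcrpy's actual implementation, just a minimal stand-in: a decorator that saves a fetcher's responses to a JSON "cassette" on first use and serves them from disk afterwards (the fetcher and URL here are hypothetical):

```python
import json
import os
import tempfile

def cassette(path):
    """Record a fetcher's responses to a JSON file on first use and
    replay them afterwards -- the core record/replay idea behind VCR."""
    def decorator(fetch):
        def wrapper(url):
            tape = {}
            if os.path.exists(path):
                tape = json.loads(open(path).read())
            if url in tape:
                return tape[url]          # replay: no network hit
            body = fetch(url)             # record: call the real endpoint once
            tape[url] = body
            with open(path, "w") as f:
                json.dump(tape, f)
            return body
        return wrapper
    return decorator

tape_path = os.path.join(tempfile.mkdtemp(), "cassette.json")
network_calls = []

@cassette(tape_path)
def fetch(url):
    # stand-in for a real HTTP call (hypothetical endpoint)
    network_calls.append(url)
    return "response for " + url

first = fetch("https://api.example.com/items")    # recorded
second = fetch("https://api.example.com/items")   # replayed from the cassette
```

vcrpy does this transparently inside the HTTP stack rather than via a decorator on your own fetcher, but the cassette file and the record-once/replay-forever behavior are the same idea.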
It would be good to be able to have django debug toolbar integration; that way I could see which requests were made to backend APIs without leaving Django.
Having tried MITMProxy, something like httpdbg is definitely needed.
Do I have to use a specific HTTP library?
I usually use strace(1) to track these down, but it's nowhere near as ergonomic as this tool. I'm wondering now if I could patch the `open` built-in instead.
Why is tracking HTTP(S) requests a special case that couldn't be logged like any other process/function? I'd guess most people use libcurl, and you can wrap something around that.
I guess I'm lost on why this is HTTP- or Python-specific; if it is, fine.
There is a whole library of so-called instrumentations that can monkeypatch standard functions and produce traces of them. Traces can also propagate across processes and RPCs, giving you a complete picture, even in a microservice architecture.
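The monkeypatching half of that is simple to sketch. This is not OpenTelemetry's API, just a toy wrapper showing how an instrumentation can replace an existing function in place and emit a timed span without the caller changing anything (the span list and `handle_request` are made up):

```python
import functools
import time

spans = []  # collected trace data; a real tracer would export these

def traced(fn):
    """Wrap a function so every call is recorded as a timed span,
    the way auto-instrumentation libraries monkeypatch entry points."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            spans.append({
                "name": fn.__name__,
                "duration_s": time.perf_counter() - start,
            })
    return wrapper

# a stand-in for some library function we don't control
def handle_request():
    time.sleep(0.01)

# patch it in place; existing callers are instrumented for free
handle_request = traced(handle_request)
handle_request()
```

Cross-process propagation then just means attaching a trace/span ID to outgoing requests (e.g. in an HTTP header) so the next service can stitch its spans onto the same trace.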