The web evolved from a tool for accessing documents in directories into full-blown apps in the cloud, yet we kept using the same tree abstraction to sync state to the server, which doesn't make sense in a lot of places.
Maybe we need a better abstraction to begin with: discoverable native RPCs over a protocol designed for the job, like Thrift or gRPC.
This is a "work in progress". Is there any estimate of when it will be finalized? Something like during 2025, with frameworks/libraries starting to support it around 2026? Just to have a reference point: does anyone remember how long PATCH took?
That part sounds like it's asking for trouble, and I'm curious whether it will make it into the final draft. If the client mis-identifies which parts of the request body are semantically insignificant, the result is immediate cache poisoning and some fun, hard-to-debug bugs.
And if it's meant as a "MAY", it seems kind of meaningless: if the client somehow knows that one particular aspect of the request body is insignificant, it could just generate normalized request bodies in the first place?
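For what it's worth, a client that wants cache-friendly behavior can do that normalization itself before sending. A minimal sketch, assuming a JSON query body (the helper name is hypothetical, not from the draft):

```python
import json

def normalize_query_body(body: str) -> str:
    """Canonicalize a JSON query body so semantically equivalent
    requests serialize to identical bytes (hypothetical client-side
    helper; not part of the QUERY draft)."""
    data = json.loads(body)
    # sort_keys plus compact separators removes key order and
    # whitespace differences -- the only things a generic client
    # can safely treat as insignificant.
    return json.dumps(data, sort_keys=True, separators=(",", ":"))

# Two equivalent bodies normalize to the same bytes, so a cache
# keyed on the normalized body treats them as one query.
a = normalize_query_body('{"limit": 10, "fields": ["email"]}')
b = normalize_query_body('{ "fields":["email"], "limit":10 }')
assert a == b
```

Anything beyond that (e.g. deciding that `limit` is irrelevant for a particular endpoint) needs server-specific knowledge, which is exactly where the cache-poisoning risk lives.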
OPTIONS and PROPFIND don't get it.
There should be an HTTP method (or a .well-known URL path prefix) to query and list every Content-Type available for a given URL.
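For QUERY specifically, the draft gets partway there: it defines an Accept-Query response header a server can send to advertise which query media types a resource supports. A hypothetical exchange (header placement and values are illustrative, assuming I'm reading the draft right):

```http
OPTIONS /contacts HTTP/1.1
Host: example.org

HTTP/1.1 204 No Content
Allow: GET, QUERY, OPTIONS
Accept-Query: example/query, application/sql
```

That only covers request formats for one method, though, not the general "list every Content-Type behind this URL" discovery you're describing.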
From https://x.com/westurner/status/1111000098174050304 :
> So, given the current web standards, it's still not possible to determine what's lurking behind a URL given the correct headers? Seems like that could've been the first task for structured data on the internet
Why an HTTP Get Request Shouldn’t Have a Body - https://www.baeldung.com/cs/http-get-with-body - July 2024
The use case in 4.2, with both Content-Location and Location, feels weird to me; I'm not sure you'd want multiple URLs with different meanings. Isn't it harder to keep the operation idempotent if we're generating URLs for both the request and the result? I'm also not sure Location is generally meaningful in a 200 response; that may impact other RFCs.
It could be interesting to see a sample where the query is created immediately but the result only becomes available later. That's probably just a 303 See Other with a Retry-After?
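Presumably something like this (URLs and timing invented for illustration):

```http
QUERY /contacts HTTP/1.1
Host: example.org
Content-Type: example/query

select surname, givenname, email limit 10

HTTP/1.1 303 See Other
Location: /queries/17/results
Retry-After: 10

GET /queries/17/results HTTP/1.1
Host: example.org
```

Nothing QUERY-specific is needed; it composes with the existing async-response pattern.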
Sounds like GraphQL won.
So how about simply removing those limits on GET requests?
IETF 2034: The HTTP TELLME Method
IETF 2038: The HTTP WHATIF Method
QUERY /contacts HTTP/1.1
Host: example.org
Content-Type: example/query
Accept: text/csv

select surname, givenname, email limit 10
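And a plausible response to the example above (field values invented):

```http
HTTP/1.1 200 OK
Content-Type: text/csv

surname,givenname,email
Smith,John,john.smith@example.org
```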
Not quite full SQL (no JOIN or WHERE in any of the examples I see).
Hmm... as long as you handle authentication/authorization correctly, why is this bad?
It's a way to pluck out certain JSON fields that would otherwise all be returned? Kind of like one of the benefits of GraphQL. Will this catch on?
> The QUERY method provides a solution that spans the gap between the use of GET and POST. As with POST, the input to the query operation is passed along within the content of the request rather than as part of the request URI. Unlike POST, however, the method is explicitly safe and idempotent, allowing functions like caching and automatic retries to operate.
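Since the method is declared safe and idempotent, a client can retry blindly without worrying about duplicate side effects, which is exactly what you can't do with POST. A rough sketch using Python's standard library (URL and retry policy are made up; note that `urllib` passes nonstandard method tokens through as-is):

```python
import urllib.error
import urllib.request

def query(url: str, body: bytes, content_type: str, retries: int = 3) -> bytes:
    """Send an HTTP QUERY request, retrying on network failure.

    Blind retries are only acceptable because the draft defines
    QUERY as safe and idempotent, unlike POST.
    """
    last_err = None
    for _ in range(retries):
        req = urllib.request.Request(
            url,
            data=body,
            method="QUERY",  # urllib accepts arbitrary method names
            headers={"Content-Type": content_type},
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read()
        except urllib.error.URLError as err:
            last_err = err  # idempotent, so retrying is harmless
    raise last_err
```

A cache in front of the server gets the same guarantee: it can key on method + URL + (normalized) body and serve repeats without re-executing the query.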