My answer: `/users.json`. HTTP is optimized for large-grain hypermedia transfer; caching is a big part of this, and none of the URI schemes given above are very cache-friendly.

Squid, for example, is a popular HTTP cache that by default will not cache any URL that has a querystring. In addition, many clients, and even servers and intermediaries, generate and consume query string parameters in an undefined order; that is, "?a=3&b=5" can be arbitrarily rewritten as "?b=5&a=3". For HTTP caching, however, the order matters, so the two URLs will be cached separately even though they return the same content. As you add parameters, the number of equivalent-but-distinct cache entries grows combinatorially.
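To make the ordering problem concrete, here is a minimal Python sketch (the URLs and the `cache_key` helper are illustrative, not taken from the question): a naive cache keyed on the raw URL treats the two equivalent requests as different resources, while sorting the parameters first collapses them into one entry.

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def cache_key(url: str) -> str:
    """Normalize a URL into a cache key by sorting its query parameters."""
    parts = urlsplit(url)
    sorted_query = urlencode(sorted(parse_qsl(parts.query)))
    return urlunsplit((parts.scheme, parts.netloc, parts.path, sorted_query, ""))

a = "http://api.example.com/users.json?a=3&b=5"
b = "http://api.example.com/users.json?b=5&a=3"

print(a == b)                        # False -- a naive cache sees two resources
print(cache_key(a) == cache_key(b))  # True  -- normalized keys collapse them
```

Most off-the-shelf HTTP caches do not perform this normalization for you, which is exactly why querystring-heavy URI schemes cache so poorly.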
You should design your resources (and their representations) to take advantage of caching by two opposing but complementary techniques:
- Combine fragmented and partial representations into larger, unified representations, and
- Separate large, unified representations into smaller representations along cache boundaries (which tend to be transactional boundaries), but related by hyperlinks.
In your case, step 1 implies combining associations and parts into the "users" representation, without any option for the client to configure which ones and how many. That will allow you to aggressively cache the single response representation without overloading your (and their) caches with a combinatorial explosion of responses due to all the querystring options.
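As an illustration of step 1, here is a minimal sketch using Flask (the framework, route, sample data, and max-age value are all assumptions for the example, not part of the question): one fixed, combined representation with no query-string knobs, so every client asks for the same URL and shared caches only ever need one entry for it.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/users.json")
def users():
    # Associations and parts are embedded; the client cannot pick and choose,
    # so there is exactly one cacheable response body for this resource.
    body = {
        "users": [
            {"id": 1, "associations": [{"id": 10}], "parts": [{"id": 20}]},
            {"id": 2, "associations": [{"id": 11}], "parts": [{"id": 21}]},
        ]
    }
    response = jsonify(body)
    # Aggressive but bounded caching; tune max-age to how stale you can tolerate.
    response.headers["Cache-Control"] = "public, max-age=60"
    return response
```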
Step 2 implies separating `/users.json` into separate "user" entities, each with an "associations" resource and a "parts" resource. So `/users/{id}` and `/users/{id}/associations` and `/users/{id}/parts`. The `/users` resource then returns an array of hyperlinks to the individual `/users/{id}` resources, and each `/users/{id}` representation contains hyperlinks to its associations and parts (that part is more malleable; it might fit your application better to embed the associations and parts directly into the user representation). That will allow you to aggressively cache the response for each "in demand" resource without having to cache your whole database.
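A sketch of what step 2 might look like, again using Flask purely for illustration (the routes, the in-memory data store, and the link format are assumptions): each resource gets its own URL, and the representations reference each other by hyperlink rather than embedding everything.

```python
from flask import Flask, jsonify

app = Flask(__name__)

USERS = {1: {"name": "alice"}, 2: {"name": "bob"}}  # stand-in data store

@app.route("/users")
def list_users():
    # The collection is just an index of links to the individual resources.
    return jsonify({"users": [f"/users/{uid}" for uid in USERS]})

@app.route("/users/<int:uid>")
def get_user(uid):
    user = USERS[uid]
    return jsonify({
        "name": user["name"],
        "associations": f"/users/{uid}/associations",
        "parts": f"/users/{uid}/parts",
    })

@app.route("/users/<int:uid>/associations")
def get_associations(uid):
    return jsonify({"associations": []})  # placeholder body

@app.route("/users/<int:uid>/parts")
def get_parts(uid):
    return jsonify({"parts": []})  # placeholder body
```

Each of these URLs is now a stable, querystring-free cache key, so intermediaries and browsers can hold onto the hot ones independently.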
Then your users will scream, "but that's 10 times the network traffic!" To which you calmly respond, "no, that's 1/10th the network traffic, because 9 times out of 10 the requested resources are already sitting in the client-side (browser) cache; when they're not, it's 1/10th the server's computational load, because they're sitting in a server-side cache; and when they're not there either, a smart server-side cache keeps the requests from stampeding the database."
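For the claim that repeat visits mostly hit the client-side cache, here is a hedged sketch of the validation mechanics (Flask again, and the ETag scheme and max-age are illustrative choices, not the only way to do it): a conditional GET whose `If-None-Match` still matches gets a bodyless 304 instead of the full representation.

```python
import hashlib
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/users/<int:uid>")
def get_user(uid):
    body = jsonify({"id": uid,
                    "associations": f"/users/{uid}/associations",
                    "parts": f"/users/{uid}/parts"})
    etag = hashlib.sha1(body.get_data()).hexdigest()
    if request.headers.get("If-None-Match") == etag:
        # The client's cached copy is still valid: no body goes over the wire.
        return "", 304, {"ETag": etag}
    body.headers["ETag"] = etag
    body.headers["Cache-Control"] = "public, max-age=60"
    return body
```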
Of course, if the `/users` resource is something a million new visitors hit every day, then your optimization path might be different. But it doesn't seem so based on your example URI schemes.