In recent days I have been working with the GetFullPocketQueryData API call (finally got my macro tricked out) and can report some interesting information that was speculated about above.
The most significant revelation to me has been that this is a "refresh" process. The call behaves exactly as if it got the list of GC codes from the gpx file that the PQ generator created and then did a refresh with 5 logs on those GC codes via the API. I am not saying that this is how it is programmed (I wouldn't know), just that it behaves exactly this way.
As such it delivers a completely fresh version of the data, in contrast to the gpx file the user can download, which is already a bit stale. But it does not include any geocaches that newly became eligible to be in the PQ since the time of that gpx file's creation (the query itself is apparently not actually re-run). So geocaches placed since the PQ was originally run are not in the result. Caches that became archived or otherwise changed status are included, as are favorite points. For all practical purposes that I am aware of, the call has the same outcome as if you took a PQ gpx file, loaded it into GSAK with settings that set the userflag, then filtered on the userflag and refreshed with the log number set to 5.
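The observed behavior can be sketched in miniature. This is a conceptual model only, not the real API: the function name, GC codes, and data values here are all hypothetical stand-ins. The point is that the call refreshes a frozen code list rather than re-running the query.

```python
# Hypothetical model of the observed behavior: the call appears to take the
# GC codes frozen in the original PQ gpx file and refresh each of them,
# rather than re-running the query. All names and data are illustrative.

# GC codes captured when the PQ generator last ran (a frozen snapshot).
pq_gpx_codes = {"GC1AAAA", "GC1BBBB", "GC1CCCC"}

# Current server-side state, including a cache placed after the PQ ran
# (GC1DDDD) and one of the original caches that is now archived (GC1BBBB).
live_data = {
    "GC1AAAA": {"status": "active", "favorite_points": 12},
    "GC1BBBB": {"status": "archived", "favorite_points": 3},
    "GC1CCCC": {"status": "active", "favorite_points": 0},
    "GC1DDDD": {"status": "active", "favorite_points": 7},
}

def refresh_frozen_list(codes):
    """Refresh only the frozen code list; the query itself is not re-run."""
    return {code: live_data[code] for code in codes if code in live_data}

result = refresh_frozen_list(pq_gpx_codes)
```

Under this model, `GC1DDDD` is absent from the result even though it would now match the query, while the archived status and favorite points of the original caches come back fresh, which matches what I see.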
For testing purposes it is quite easy to compare the durations of the two approaches and watch the activity as it progresses.
When I do that, I see no speed advantage. But using the GSAK refresh (with 5 logs) there is a lot of back-and-forth between GSAK and the API server, whereas the API call in question has GSAK waiting 2-3 minutes while the data are packaged and then sent in one uninterrupted stream. Whether one puts more load on the server would need to be measured by someone who actually has the tools and access to do that (clearly we do not). But if what I see is any indication, the load is pretty much the same.
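The difference in traffic shape versus total load can be made concrete with a toy model. Again, this is a sketch under assumed costs, not a measurement: both helper functions and their cost units are hypothetical.

```python
# Toy model of the two download styles. The server does roughly the same
# total work either way; only the number of round-trips differs.
# "cost_per_cache" is an assumed unit of server effort, not a measured value.

def gsak_style_refresh(codes, cost_per_cache=1):
    """Many small request/response round-trips, one per cache."""
    round_trips = len(codes)
    server_work = cost_per_cache * len(codes)
    return round_trips, server_work

def bulk_pq_download(codes, cost_per_cache=1):
    """One long wait while the server packages everything, then one stream."""
    round_trips = 1
    server_work = cost_per_cache * len(codes)
    return round_trips, server_work

codes = [f"GC{i:05d}" for i in range(500)]
refresh_trips, refresh_work = gsak_style_refresh(codes)
bulk_trips, bulk_work = bulk_pq_download(codes)
```

In this model the total server work is identical and only the connection overhead differs, which is consistent with seeing no speed advantage in either direction.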