[Freebase-discuss] Why do MQL queries Timeout, if they have not completed ? WAS Re: filtering date_of_birth
masouras at google.com
Tue Oct 5 05:16:27 UTC 2010
The freebase Python mqlreaditer uses cursors behind the scenes to iterate
through a list of results. It's very useful if your query is easy to
compute but has a very large output set (like your people name/gender
example). There are cases where the query cannot finish in 8 seconds,
though, and in those cases you do need a higher timeout.
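That cursor-driven, one-result-at-a-time pattern can be sketched like this. This is a toy stand-in, not the real client: the actual mqlreaditer sends each request to the live MQL service and passes the opaque cursor it gets back into the next request, whereas fetch_page here just pages over a local list.

```python
# A minimal sketch of the chunked-iteration pattern mqlreaditer uses.
# fetch_page is a stand-in for one MQL read against the API.

def fetch_page(data, cursor, limit=100):
    """Stand-in for one MQL read: return a chunk plus the next cursor."""
    chunk = data[cursor:cursor + limit]
    next_cursor = cursor + limit if cursor + limit < len(data) else None
    return chunk, next_cursor

def read_iter(data, limit=100):
    """Yield one result at a time, fetching behind the scenes in chunks."""
    cursor = 0
    while cursor is not None:
        chunk, cursor = fetch_page(data, cursor, limit)
        for result in chunk:
            yield result

# 250 fake records get fetched in three requests of at most 100 each,
# but the caller just sees a flat iterator.
records = [{"id": i} for i in range(250)]
results = list(read_iter(records))
```

The point of the design is that each individual request stays small and cheap, so no single query has to beat the timeout on its own.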
On Mon, Oct 4, 2010 at 9:14 PM, Tom Morris <tfmorris at gmail.com> wrote:
> On Mon, Oct 4, 2010 at 9:27 PM, Thad Guidry <thadguidry at gmail.com> wrote:
> > The alternative is downloading everything, and using the TSV as Shawn
> > noted. But then Tom mentions an alternative that I hadn't heard about
> > before where he "batches" using MQL ?
> The client libraries have a 'mqlreaditer' method which presents the
> programmer with a 'one result at a time' abstraction, but behind the
> scenes fetches the results in chunks of 100 at a time (the default
> MQL limit value).
> As it happens, I did a similar study to Shawn's back in July. I was
> principally interested in name forms (and what percentage of the names
> were messed up by things like Open Library), so I was only looking at
> names and genders of people, but I was able to download 1,674,877
> names and genders in 54 minutes. Now that was running flat out, which
> I don't recommend (I was running the experiment against the sandbox
> during off-peak hours), and it was probably technically over my daily
> MQL query quota, but it does demonstrate that it's not that hard to
> download significant chunks of data from the live graph.
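For a sense of scale, the figures quoted above work out to roughly the following. This is back-of-the-envelope arithmetic only, and it assumes every fetch used the default chunk size of 100:

```python
# Rough rates for 1,674,877 records downloaded in 54 minutes,
# assuming chunks of 100 (the default MQL limit mentioned above).
import math

records = 1_674_877
seconds = 54 * 60                      # 3240 s
records_per_sec = records / seconds    # ~517 records/s
requests = math.ceil(records / 100)    # ~16,749 chunked MQL reads
requests_per_sec = requests / seconds  # ~5.2 queries/s
```

So "running flat out" here means on the order of five MQL queries per second, sustained for nearly an hour.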
> You are receiving this message because you are subscribed to the
> Freebase-discuss mailing list.
> To post a message to the list: Freebase-discuss at freebase.com
> To unsubscribe, view archives, etc: