
Performance issues with large data sets #268

@mwilliamson-healx

Description

For our use case, we send a few thousand objects to the client. We're currently using a normal JSON API, but are considering using GraphQL instead. However, when returning a few thousand objects, the overhead of resolving values makes it impractical to use. For instance, the example below returns 10000 objects with an ID field, and that takes around ten seconds to run.

Is there a recommended way to improve the performance? The approach I've used successfully so far is to use the existing parser to parse the query, and then generate the response by creating dictionaries directly, which avoids the overhead of resolving/completing on every single value.

import graphene

class UserQuery(graphene.ObjectType):
    id = graphene.Int()

class User(object):
    def __init__(self, id):
        self.id = id

users = [User(index) for index in range(10000)]

class Query(graphene.ObjectType):
    # A list of UserQuery objects (graphene.List(UserQuery), rather than
    # the invalid graphene.Field(UserQuery.List())).
    users = graphene.List(UserQuery)

    def resolve_users(self, args, info):
        return users

schema = graphene.Schema(query=Query)

# Resolving 10000 objects takes around ten seconds.
print(schema.execute('{ users { id } }').data)
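The direct-dictionary workaround described above could be sketched roughly like this. This is a toy illustration, not graphene's API: `selected_field_names` is a hypothetical stand-in for walking the real parsed AST, and it only handles flat queries.

```python
# A minimal sketch of the "build dictionaries directly" workaround:
# find the requested fields once, then build plain dicts in a tight
# loop instead of resolving every value individually.

def selected_field_names(query):
    """Crude stand-in for a real GraphQL parser: extract the innermost
    selection set of a flat query such as '{ users { id } }'."""
    inner = query[query.rindex('{') + 1:query.index('}')]
    return inner.split()

class User(object):
    def __init__(self, id):
        self.id = id

users = [User(index) for index in range(10000)]

fields = selected_field_names('{ users { id } }')  # ['id']
# One dict comprehension per object; no per-value resolve/complete step.
data = {'users': [{f: getattr(u, f) for f in fields} for u in users]}
```

In a real implementation the field list would come from the existing GraphQL parser rather than string slicing, but the shape of the fast path is the same.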

Activity

ekampf (Contributor) commented on Sep 2, 2016

@mwilliamson-healx I've had the same problem.
Fortunately, the next version of Graphene fixes the issue (and also adds performance tests to make sure it doesn't regress).
Though it's a risk to use the library's bleeding edge, I've been running the next version (pip install "graphene>=1.0.dev") in production for a couple of weeks now without problems.

So you should give it a try and see if it solves your problem (and if not, maybe there are some new performance test cases to add to Graphene's performance tests).

syrusakbary (Member) commented on Sep 2, 2016

@mwilliamson-healx as Eran pointed out, the next version has been rewritten with a special focus on performance.

We also added a benchmark for a case similar to the one you describe (retrieving about 100k elements instead of 10k):
https://github.com/graphql-python/graphene/blob/next/graphene/types/tests/test_query.py#L129

Retrieving 10k elements should be about 10-20 times faster on the next branch (50-100ms?):
https://travis-ci.org/graphql-python/graphene/jobs/156652274#L373

It would be great if you could test this case on the next branch and report any non-performant cases you run into; I will happily work on them :).

mwilliamson-healx (Author) commented on Sep 5, 2016

Thanks for the suggestion! I gave Graphene 1.0.dev0 a go, and while it's certainly faster, it still takes around a second to run the example above. Admittedly, I didn't try it on the speediest of machines, but it suggests that resolution would still be the dominant factor in response time for our real data.

syrusakbary (Member) commented on Sep 5, 2016

@mwilliamson-healx some of the performance bottleneck was also in the OrderedDict generation.
For that, graphql-core uses cyordereddict when available (an implementation of OrderedDict in Cython that runs about 2-6x faster).

Could you try installing cyordereddict with pip install cyordereddict and running the tests again? (There's no need to modify anything in the code.)

Thanks!
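The "used when available, no code changes needed" behaviour is the usual optional-dependency import pattern; a sketch of the idea (not graphql-core's exact code):

```python
# Sketch of the optional-dependency pattern: prefer the C-accelerated
# OrderedDict when cyordereddict is installed, and fall back to the
# stdlib implementation otherwise. Calling code is unaware of which
# implementation it got.
try:
    from cyordereddict import OrderedDict  # Cython-accelerated, if installed
except ImportError:
    from collections import OrderedDict    # stdlib fallback

# Response maps are then built with whichever OrderedDict was imported.
result = OrderedDict()
result['users'] = [{'id': 0}, {'id': 1}]
```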

syrusakbary (Member) commented on Sep 5, 2016

PS: There are plans to port some code to Cython (while still preserving the Python implementation) to make graphene/graphql-core even more performant; however, any other suggestions are always welcome! :)

mwilliamson-healx (Author) commented on Sep 6, 2016

Thanks again for the suggestion! Using cyordereddict shaves about 200ms off the time (from 1s to 0.8s), so it's an improvement, but still not ideal. I had a look around the code, but nothing stuck out to me as an easy way of improving performance. The problem (from my extremely quick and poorly informed glance!) is that you end up resolving every single value, which includes going through any middleware and coordinating promises. Keeping that functionality while being competitive with just spitting out dicts directly seems rather tricky.

The proof of concept I've got sidesteps the issue somewhat by parsing the GraphQL query and then relying on the object types being able to generate the requested data directly, without having to further resolve values. It's very much a proof of concept (so it doesn't support fragments and isn't really GraphQL compliant yet), but feel free to have a look. Assuming the approach is sane, it's hard to see how to reconcile it with the normal GraphQL resolve approach.

syrusakbary (Member) commented on Sep 6, 2016

Hi @mwilliamson-healx,
First off, congrats on your great proof of concept!

I've been thinking for a while about how we can improve performance in GraphQL. This repository (graphene) uses graphql-core under the hood, which is a fairly direct port of the GraphQL-js reference implementation.

The problem we are seeing, in both graphql-core and graphql-js, is that each type/value is checked at runtime (that is, the resolution+serialization function is "discovered" at runtime each time a value is completed). In JS the performance difference is not as big, since its JIT optimizes each of the type/value completion calls. However, as Python doesn't have a JIT by default, this results in a quite expensive operation.

In the current graphql-js and graphql-core implementations, executing a GraphQL query looks like:

Parse AST from string (==> validate the AST against the given schema) ==> Execute the AST given a Root type.

However, we can create a "Query Builder" as an intermediate step before executing. It will know exactly which fields we are requesting, and therefore their associated types and resolvers, so we don't need to "search" for them each time a value is completed.
This way, the process will be something like:

Parse AST from string (==> validate the AST against the given schema) ==> Build the Query resolver based on the AST ==> Execute the Query resolver given a Root type.

Your proof of concept does the latter, so the performance difference is considerable compared with the current graphql-core implementation.

I think it's completely reasonable to introduce this extra Query-resolver build step before executing to avoid the performance bottleneck of doing the lookups at runtime. In fact, I would love to have it in graphql-core.

And I also think this would be super valuable in the graphql-js implementation, as it would improve performance and push forward the other language implementations ( @leebyron ).
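The query-builder step described above could be sketched like this. All names here are hypothetical illustrations of the idea, not the actual graphql-core code:

```python
# Sketch of the "Query Builder" idea: instead of discovering each
# field's resolver and serializer per value at execution time, look
# them up once per query into a plan, then run the plan over the data.

def compile_plan(fields, type_info):
    """Resolve each requested field's resolver + serializer exactly once."""
    return [(name, type_info[name]['resolve'], type_info[name]['serialize'])
            for name in fields]

def execute_plan(plan, objects):
    # The hot loop touches no schema metadata; it only calls the
    # functions captured in the plan.
    return [{name: serialize(resolve(obj))
             for name, resolve, serialize in plan}
            for obj in objects]

# Hypothetical type metadata for a type with one Int field "id".
type_info = {
    'id': {'resolve': lambda obj: obj['id'], 'serialize': int},
}

plan = compile_plan(['id'], type_info)  # built once per query
result = execute_plan(plan, [{'id': 1}, {'id': 2}])
```

The per-value cost drops to a resolver call plus a serializer call, which is the essence of the speedup the intermediate build step is after.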

mwilliamson (Contributor) commented on Sep 6, 2016

Thanks for the kind words. One question I had was how much you'd imagine trusting the query builder. For my implementation, I was planning on putting the responsibility for correctness onto the queries (rather than having the GraphQL implementation check them). The result is that, unlike normal GraphQL implementations, it's possible to implement something that doesn't conform to the GraphQL spec.

syrusakbary (Member) commented on Sep 7, 2016

I'm working on the query builder concept. As of right now, the benchmarks show about a 4x improvement when returning large datasets.

Related PR in graphql-core: graphql-python/graphql-core#74

syrusakbary (Member) commented on Sep 8, 2016

Some updates!
I've been working non-stop on improving performance with the Query Builder.

Benchmarks

Retrieving 10k ObjectTypes

Doing something similar to the following query, where the allContainers field is a [ObjectType] and x is an Integer:

{
  allContainers {
    x
  }
}

Retrieving a List with 10k Ints

Doing something similar to the following query, where the allInts field is an [Integer]:

{
  allInts
}

NOTE: Just serializing a plain list using GraphQLInt.serialize takes about 8ms, so the gains look better after subtracting this amount from the totals: 4ms vs 22ms.
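That serialization-only floor can be reproduced roughly with timeit. Here `serialize_int` is a stand-in for GraphQLInt.serialize (which is essentially integer coercion), not the benchmark actually used:

```python
# Rough reproduction of the serialization-only baseline: coercing a
# plain list of 10k ints is the floor that any executor must pay,
# so it's fair to subtract it from the end-to-end timings.
import timeit

def serialize_int(value):
    # Stand-in for GraphQLInt.serialize: coerce the value to a plain int.
    return int(value)

ints = list(range(10000))
seconds = timeit.timeit(lambda: [serialize_int(x) for x in ints],
                        number=10) / 10
# `seconds` is the per-run baseline to subtract from measured totals.
```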

Conclusion

The work I've done so far demonstrates that there is still room to improve the code's performance while preserving full compatibility with GraphQL syntax.

The proof of concept's speedup is between 5x and 15x while maintaining the syntax and features GraphQL has. There is still a lot of work to do, but it's a first approach that will let us discover new paths for speed improvement.

Extra

I think by using Cython for some critical instructions we can gain about another 10-20x in speed.

Transport

Apart from using Cython, I'm thinking about how we can plug multiple kinds of transports into GraphQL.
Instead of creating Python objects each time we access a field and then transforming the result into JSON, another approach would be to transform the values directly into JSON or whatever transport we are using.

This way the result could be created directly in the output format, and we could plug in other transports such as binary (Cap'n Proto/FlatBuffers/Thrift/others), msgpack, or anything else we can think of.
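The transport idea could be sketched as follows, using JSON as the example encoding. This is a hypothetical illustration of the approach, not an API from graphene or graphql-core:

```python
# Sketch of the "transport" idea: instead of materializing Python
# dicts for the whole response and then dumping them to JSON, encode
# each row straight into the output format as it is produced.
import io
import json

def stream_users_json(users, fields):
    buf = io.StringIO()
    buf.write('{"users": [')
    for i, user in enumerate(users):
        if i:
            buf.write(',')
        # Encode one object directly; no intermediate response tree
        # survives past this iteration.
        buf.write(json.dumps({f: getattr(user, f) for f in fields}))
    buf.write(']}')
    return buf.getvalue()

class User(object):
    def __init__(self, id):
        self.id = id

payload = stream_users_json([User(1), User(2)], ['id'])
```

Swapping `json.dumps` for a msgpack or binary encoder is what would make the transport pluggable.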

mwilliamson (Contributor) commented on Sep 22, 2016

Thanks for working on this. I've taken a look at the proof of concept you wrote, but it's not clear to me exactly how it behaves, and how it's saving time versus the existing implementation. It seems like it's still resolving all fields of objects in the response, but I could easily have misread.

I adjusted my proof of concept to (optionally) integrate with GraphQL properly. This means that you can generate the schema, introspect, and do all the other stuff that GraphQL does, but it means you hit the performance penalty again. It seems to me that the easiest way of fixing this for my use case would be a way to prevent resolution from descending into the object that my proof of concept produces -- a way of returning a value from resolve functions that doesn't trigger resolution on any fields (since they're already resolved).

Perhaps something like:

def resolve_users(...):
    ...
    return FullyResolvedValue(users)

where users is already fully resolved by inspecting the AST or whatever. Alternatively, a decorator on the function itself might be clearer.

This shifts more responsibility onto the calling code to make sure that the returned value is of the correct shape in order to ensure it's still a valid GraphQL implementation, but that's definitely a good trade-off for me.
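An executor honouring such a wrapper might look like the sketch below. FullyResolvedValue and complete_value are hypothetical names illustrating the proposal, not existing graphene API:

```python
# Sketch of the proposed escape hatch: a wrapper that tells the
# executor a value is already in its final shape, so per-field
# completion can be skipped entirely for it.

class FullyResolvedValue(object):
    def __init__(self, value):
        self.value = value

def complete_value(value, complete_fields):
    if isinstance(value, FullyResolvedValue):
        return value.value          # already resolved: pass through as-is
    return complete_fields(value)   # normal per-field completion path

def resolve_users():
    # Pre-built elsewhere, e.g. by inspecting the query AST directly.
    users = [{'id': index} for index in range(3)]
    return FullyResolvedValue(users)

result = complete_value(resolve_users(), complete_fields=lambda v: v)
```

As the comment above notes, the executor then trusts the resolver to return data of the right shape; the wrapper is exactly the point where that responsibility shifts to the calling code.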
