1. Oct
    23

    Cancellable apollo-client queries

    Posted in : apollo-client and typescript

    If you've ever had to work with a GraphQL API featuring a few slow queries, you'll probably understand the need to be able to cancel apollo-client queries. I'm using apollo-client 3.2.5, the latest at the time of writing. Since apollo-client uses fetch, my first attempt to cancel a query was with an AbortController:

    // In apollo-client 3.x, these types come from the @apollo/client package;
    // apolloClient is the app's ApolloClient instance, created elsewhere.
    import { ApolloQueryResult, QueryOptions } from '@apollo/client'

    interface CancellablePromise<T> extends Promise<T> { cancel: () => void }
    
    export function cancellableQuery(options: QueryOptions<any, any>): CancellablePromise<ApolloQueryResult<any>> {
      const abortController = new AbortController()
      let promise = apolloClient.query({
        ...options,
        context: {
          fetchOptions: {signal: abortController.signal}
        }
      }) as CancellablePromise<ApolloQueryResult<any>>
      promise.cancel = () => abortController.abort()
      return promise
    }

    It partially works: the request is cancelled, and the promise doesn't complete. However, there's a big issue: subsequent calls of the same query with the same variables no longer work; no request attempt is made at all. You can follow this pull request, which might one day resolve it: handle external fetch abort.

    Not discouraged, I tried another approach using watchQuery and its unsubscribe method:

    interface CancellablePromise<T> extends Promise<T> { cancel: () => void }
    
    export function cancellableQuery(options: QueryOptions<any, any>): CancellablePromise<ApolloQueryResult<any>> {
      const observable = apolloClient.watchQuery(options)
      let subscription: ZenObservable.Subscription
      let promise = new Promise((resolve, reject) => {
        subscription = observable.subscribe(
          (res) => resolve(res),
          () => reject()
        )
      }) as CancellablePromise<ApolloQueryResult<any>>
      promise.cancel = () => subscription.unsubscribe()
      return promise
    }

    The logic behind it is that the call to unsubscribe should cancel the request when the browser supports it. However, that just doesn't happen for me in Google Chrome (version 86.0.4240.111): the promise doesn't complete, but the request isn't cancelled, and subsequent calls of the same query with the same variables don't work. It seems I'm not the only one to have noticed; you can track the progress on this issue in the hope that it gets fixed at some point: Unsubscribing from a query does not abort network requests.

    You can find a few more open issues related to cancelling apollo-client requests.

    After searching more about it, I found out about queryDeduplication. It's apparently the reason why subsequent calls to cancelled queries no longer work: the request is still considered to be "in-flight", so no further attempt is made. With deduplication disabled, our code becomes:

    interface CancellablePromise<T> extends Promise<T> { cancel: () => void }
    
    export function cancellableQuery(options: QueryOptions<any, any>): CancellablePromise<ApolloQueryResult<any>> {
      const abortController = new AbortController()
      let promise = apolloClient.query({
        ...options,
        context: {
          fetchOptions: {signal: abortController.signal},
          queryDeduplication: false
        }
      }) as CancellablePromise<ApolloQueryResult<any>>
      promise.cancel = () => abortController.abort()
      return promise
    }

    This time it finally works: the request is cancelled, the promise doesn't complete, the next call of the same query with the same variables completes successfully, and subsequent calls are served from the cache.
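    For illustration, here is the same cancel-attachment pattern stripped of the apollo specifics: a minimal, hypothetical makeCancellable helper (the name is mine, not part of apollo-client) that wires any signal-aware async function to a cancel() method.

```typescript
interface CancellablePromise<T> extends Promise<T> { cancel: () => void }

// Wrap any signal-aware async function so its promise exposes cancel().
function makeCancellable<T>(run: (signal: AbortSignal) => Promise<T>): CancellablePromise<T> {
  const abortController = new AbortController()
  const promise = run(abortController.signal) as CancellablePromise<T>
  promise.cancel = () => abortController.abort()
  return promise
}

// A slow task that resolves after 5s unless the signal fires first
const task = makeCancellable<string>((signal) => new Promise((resolve, reject) => {
  const timer = setTimeout(() => resolve('done'), 5000)
  signal.addEventListener('abort', () => {
    clearTimeout(timer)
    reject(new Error('cancelled'))
  })
}))

task.cancel()
task.catch((err) => console.log(err.message)) // logs "cancelled"
```

    This only works because the wrapped function honours the signal, which is exactly what apollo-client's fetch does under the hood once the signal reaches fetchOptions.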

    It's also possible to combine watchQuery, an AbortController, and disabled queryDeduplication:

    export function cancellableQuery(options: QueryOptions<any, any>): CancellablePromise<ApolloQueryResult<any>> {
      const abortController = new AbortController()
      const observable = apolloClient.watchQuery({
        ...options,
        context: {
          fetchOptions: {
            signal: abortController.signal
          },
          queryDeduplication: false
        }
      })
      let subscription: ZenObservable.Subscription
      let promise = new Promise((resolve, reject) => {
        subscription = observable.subscribe(
          (res) => resolve(res),
          () => reject()
        )
      }) as CancellablePromise<ApolloQueryResult<any>>
      promise.cancel = () => {
        abortController.abort()
        subscription.unsubscribe()
      }
      return promise
    }

    The result is the same as the previous version, with no further benefit that I could see. Finally, I also tried changing the fetchPolicy to network-only, but as far as I could tell it only prevents completed queries from being served from the cache, which isn't something I need.

    Comments

  2. Mar
    12

    Better network performance in Docker/Test Kitchen with Virtualbox on Mac

    Posted in : virtualbox, docker, test-kitchen, and mac

    If you’ve experienced really slow downloads from within VirtualBox on Mac, chances are you’re using the default NIC for your NAT interface.

    I've seen docker pulls taking ages when a layer is more than a couple of MB, and the Chef installer taking more than 10 minutes to download its package…

    Here’s how to fix it in docker-machine and test-kitchen.

    For docker-machine

    Check your docker-machine VM's name:

    docker-machine ls

    It will give you something like the following (it could differ depending on your config):

    NAME           ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
    default        -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.5

    If it's Running, stop it first:

    docker-machine stop default

    Then change the NIC type to PCnet-FAST III (Am79C973) instead of the default Intel PRO/1000 MT Desktop (82540EM):

    VBoxManage modifyvm "default" --nictype1 "Am79C973"

    Finally, start it again; it will now use the new NIC with hopefully improved speed:

    docker-machine start default
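    The three steps above can be wrapped in a small helper function (a hypothetical sketch; fix_nic is my name, not a docker-machine or VBoxManage command):

```shell
# Switch a docker-machine VM's NAT NIC type to Am79C973 in one go.
fix_nic() {
  vm="${1:-default}"
  docker-machine stop "$vm"
  VBoxManage modifyvm "$vm" --nictype1 "Am79C973"
  docker-machine start "$vm"
}

# Usage: fix_nic default
```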

    For test-kitchen

    Update your kitchen.yml or kitchen.local.yml so that your vagrant driver includes the customize config below:

    driver:
      name: vagrant
      customize:
        nictype1: "Am79C973"

    Enjoy using all your available bandwidth!!

    Comments

  3. Nov
    18

    Senlima - Knight Animation

    Posted in : senlima, game, preview, unity, 3d, animation, and knight

    This article has been moved over our dedicated Openhood Games blog.

    Comments

  4. Nov
    17

    Senlima - Upcoming Game Preview

    Posted in : senlima, game, preview, unity, and 3d

    This article has been moved over our dedicated Openhood Games blog.

    Comments

  5. Jun
    28

    NGinx Useful Tips

    Posted in : nginx and sysadmin

    During an epic debugging session on an NGinx configuration for a project, I discovered some useful but not so common (at least to me) configuration options.

    Debug

    NGinx does not provide much help (by default) when it comes to debugging internal redirects, proxying, and other rewrite rules. But it ships with a handy debug module that gives you a lot more information.

    You have to enable it at compile time:

    ./configure --with-debug

    And then in your configuration you can set:

    http {
      # At the http level, activate the debug log for everything
      error_log /path/to/my/detailed_error_log debug;
    
      server {
        # At the server level, activate the debug log only for this server
        error_log /path/to/my/detailed_error_log debug;
      }
    
      server {
        # At the server level, omitting the debug keyword disables debug for this server
        error_log /path/to/my/error_log;
      }
    }

    You can even debug only certain connections:

    error_log /path/to/my/detailed_error_log debug;
    events {
        debug_connection   10.0.0.1;
        debug_connection   10.0.1.0/24;
    }

    Source: NGinx Debugging Log

    Proxying

    NGinx is well known for its proxy/reverse-proxy/caching-proxy capabilities, but you'd better know how some things work so you don't waste your time on odd behaviors.

    When proxying to a remote host by URL, be aware that NGinx uses its own internal resolver for DNS names. This means in some cases it can't resolve domains unless you specify which DNS server to use.

    Let’s take an example:

    server {
      # Let's match everything which starts with /remote_download/
      location ~* ^/remote_download/(.*) {
        # but only when coming from internal request (proxy, rewrite, ...)
        internal;
    
        # Set the URI using the matched location
        set $remote_download_uri $1;
    
        # Set the host to use in request proxied (useful if remote is using vhost
        # but you're using its IP address to reach it in the proxy_pass config)
        set $remote_download_host download.mydomain.tld;
    
        # Set the url we want to proxy to
        # Using IP address of server, be sure to set the $remote_download_host
        set $remote_download_url https://10.0.0.1/$remote_download_uri;
        # Or using the full domain
        # set $remote_download_url https://$remote_download_host/$remote_download_uri;
    
        # Set Host header for vhost to work
        proxy_set_header Host $remote_download_host;
    
        # This clears the Authorization
        proxy_set_header Authorization '';
        # If your remote server needs some auth you can set it there
        # Basic auth would be something like
        # proxy_set_header Authorization 'Basic kjslkjsalkdjaslasdoiejldfkj=';
    
        # Disable local file caching, when serving file
        proxy_max_temp_file_size 0;
    
        # Finally send query to remote and response back to client
        proxy_pass $remote_download_url;
      }
    
      try_files $uri @fallback;
    
      location @fallback {
        proxy_pass http://my_backend;
      }
    }

    Example adapted from Nginx-Fu: X-Accel-Redirect From Remote Servers

    This example fully works because we used an IP address for $remote_download_url; if we used the domain instead (e.g. download.mydomain.tld), every request would fail with a 502 Bad Gateway error.

    This is due to the way NGinx's default resolver works. It's smart enough to resolve the domains in proxy_pass directives as long as they are static (it can resolve them at boot time) and they are in /etc/hosts. But since we are constructing the URL in a variable here, it does not try to resolve it. Fortunately, you can specify which DNS server it should use in such cases by setting:

    http {
      # Globally
      resolver 127.0.0.1; # Local DNS
    
      server {
        # By server
        resolver 8.8.8.8; # Google DNS
    
        location /demo {
          # Or even at location level
          resolver 208.67.222.222; # OpenDNS
        }
      }
    }
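    Tying this back to the download proxy above: the domain form of $remote_download_url only works once a resolver is in scope. A minimal sketch (the DNS address here is illustrative, pick your own):

```nginx
location ~* ^/remote_download/(.*) {
  internal;
  # Needed because the proxy_pass URL below is built at runtime from a variable
  resolver 8.8.8.8;

  set $remote_download_url https://download.mydomain.tld/$1;
  proxy_pass $remote_download_url;
}
```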

    Comments