
GraphQL Schema Stitching and Enhancing Content APIs

Have you heard about schema stitching yet? Wondering what it is, when to use it, or why? This is the post to get you started.
Jesse Martin

Jun 05, 2019
Schema Stitching with a Headless CMS

UPDATE: While not entirely the same thing, there’s a new kid on the block for composing schemas called federation. We’ll be dropping our take on that in a few weeks. For now, enjoy the content below for a deep-dive into the land of GraphQL schemas.

GraphQL schema stitching is an excellent way to enhance APIs with a wealth of extra data. The concept is simple and the execution is straightforward. How it happens, however, is anything but. What lies beneath is a complex game of delayed requests, resolver assignments and more, but thanks to the helpful folks at Apollo, we can let the tooling do the heavy lifting and give us the benefits of schema stitching with minimum effort.

#The High-Level Take on GraphQL Stitching

Schema stitching is the process of combining multiple schemas from various APIs into a single schema/API. GraphQL is a fantastic technology that allows the server to resolve our data across tables via a clean query syntax, but what happens when those tables are actually different databases altogether, living on different servers?

Wouldn’t it be great if we could resolve a list of hotels in one database with a list of regional activities in another through a single query? That’s what schema stitching allows us to do.

Schema stitching follows a four-step process:

  1. Introspect the remote APIs. (Finding out what schema structure you have to work with.)
  2. Handle type name collisions.
  3. Associate which fields get added to which types.
  4. Resolve the data.
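To make those steps concrete, here is a minimal plain-JavaScript sketch of steps 2 through 4, with no Apollo involved. The "schemas" here are just invented maps of root fields to resolver functions; a collision on a root field is resolved by prefixing one side, mirroring what we do with real schemas later.

```javascript
// Two mock "schemas": maps of root fields to resolver functions.
// In real stitching these would come from introspecting remote APIs.
const weatherSchema = { location: () => ({ temp: 21 }) };
const cmsSchema = { location: () => ({ city: 'Berlin' }), conferences: () => [] };

// Step 2: handle name collisions by prefixing the second schema's field.
// Step 3: associate every field with the merged schema.
function mergeMockSchemas(a, b, prefix) {
  const merged = { ...a };
  for (const [field, resolver] of Object.entries(b)) {
    const name = field in merged ? `${prefix}_${field}` : field;
    merged[name] = resolver;
  }
  return merged;
}

// Step 4: resolve data through the merged schema.
const merged = mergeMockSchemas(weatherSchema, cmsSchema, 'GCMS');
console.log(Object.keys(merged)); // location, GCMS_location, conferences
console.log(merged.GCMS_location().city);
```

Real stitching does far more (type checking, delegation, batching), but the collision-then-merge shape is the same.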

#When to Ditch the Stitch

It's not always a good idea to stitch schemas together. Here are some reasons you might want to skip it:

  1. The endpoints are not versioned or reliable and might change on you without proper notice.
  2. One endpoint for all your data also means one endpoint can take down the whole project.
  3. Your data sources have different TTLs, which complicates caching.
  4. Performance is a critical factor; you can optimize REST for better performance in server-to-server communication.

With the gotchas in mind, stitching is a great way to combine multiple data sets into a single distributable, explorable and maintainable API. Particularly in the API architecture for MVP projects or one-off sites, it's a great way to get developers up and running fast while staying in the GraphQL ecosystem and avoiding the overhead of technical context switching.

#Let us Begin

Our goal today is to combine three different APIs. We will combine a Geocode Weather API, Yelp and a content API powered by Hygraph.

Our data requirements for the APIs are as follows:

  • Hygraph

    • Resource: Contains a list of conferences as well as their location
    • Shape: There are a number of fields present that pertain to Conferences such as date, Call-for-Paper deadlines, etc. Critical for our needs are the city, country and start date fields on the Conference type.
  • Geocode Weather (GeocodeQL)

    • Resource: Allows us to do a reverse lookup from the string location info and resolve that against current and historical weather data. If you are unfamiliar with reverse lookup, it is similar to a vlookup in Excel. You give a known string value for a place, and it looks for any known latitudinal (Lat) and longitudinal (Lon) coordinates that correspond to that location. Basically, you give the value “Seattle, United States” and it gives you a coordinate in the pattern of x.xxxxx, x.xxxxx.
    • Shape: The primary field we are concerned about here is “location” which takes a “place” argument, which will be our string value. Internally it will resolve fields such as Lat, Lon, Weather, Moon Phase, etc.
  • Yelp

    • Resource: Also contains a location field that resolves with fields such as nearby restaurants, hotels, etc.
    • Shape: The type we are concerned with here is called Businesses.
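The reverse lookup described above can be caricatured in a few lines of plain JavaScript. The table and coordinates here are made up for illustration; a real geocoder resolves free-form place strings against a full places index.

```javascript
// Hypothetical in-memory geocoding table; a real reverse-lookup
// service resolves place strings against a places database.
const places = {
  'seattle, united states': { lat: 47.60621, lon: -122.33207 },
  'berlin, germany': { lat: 52.52001, lon: 13.40495 },
};

// Normalize the string, then look up the matching coordinates.
function reverseLookup(place) {
  return places[place.trim().toLowerCase()] ?? null;
}

console.log(reverseLookup('Seattle, United States')); // -> { lat: 47.60621, lon: -122.33207 }
```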

Steps

Now that you have all the requirements handled and we understand the goal of this tutorial, let’s begin. If you’d like to skip to the end, clone this repo to get started even faster.

Step 0: Build Tooling

The first step is to get our project directory created and ready to go. To begin, run the following command in the root directory where you’d like to create a new project. We will be using yarn, but you can use whichever package manager you would like.

yarn init

Follow the prompts to get your package.json file configured.

Since we will want to work with modern javascript, we are going to utilize some ES6 compile scripts with babel.

Create a new file called .babelrc (note the leading period, it’s important). Your .babelrc file should contain the following content:

{
  "presets": ["@babel/preset-env"]
}

You’ll need to install this dependency as well, along with a few other Babel dependencies that we’ll get to in a minute, so we’ll add them now.

yarn add -D \
@babel/core \
@babel/cli \
@babel/preset-env \
@babel/node \
@babel/register
yarn add @babel/polyfill

To utilize Babel, we need to set up some development scripts. Inside of package.json, add or replace a property called “scripts” with the following code:

"scripts": {
"clean": "rm -rf build && mkdir build",
"build-babel": "babel -d ./build ./src -s",
"build": "npm run clean && npm run build-babel",
"start": "npm run build && node ./build/app.js",
"dev": "./node_modules/.bin/nodemon --exec babel-node ./src/app.js"
},

This will let us create builds of our server code that run on any platform that supports node.js and it creates a development script for us that will restart the server when our code has changed. Notice we’ve added another dependency, nodemon, so let’s add that with the following command:

yarn add -D nodemon

Create a new folder called src at the root of your project and navigate inside. Create two files as indicated below.

src/
|-app.js
|-index.js

Inside of app.js we will require our ES6 code which allows Babel to transpile the code into ES5 syntax. Open app.js and include the following code:

import "babel-core/register";
import "babel-polyfill";
import "./index.js";

That’s it for this file, we won’t be revisiting it again. Its entire job is to tell Babel to work its Babel magic on the index imported at the bottom.

Step 1: Create the Server

We will begin with a server. There are a large number of server libraries out there, but since we will be working with other Apollo libraries, and because Apollo resolves web requests to a GraphiQL interface for us for free, we’ll use ApolloServer to begin with.

import { ApolloServer } from 'apollo-server';

// Server Function
async function run(){
  // Server Code - the end.
  const server = new ApolloServer();
  server.listen(8000).then(({ url }) => {
    console.log(`🚀 Server ready at ${url}`);
  });
}

try {
  console.log('get ready')
  run()
} catch (e) {
  console.log(e)
}

This code won’t do anything yet though since we have no Schema to pass to the ApolloServer.

Step 2: Credentials

Next, we’ll define our endpoints and our credentials. We’ll be using the library dotenv to read our secret auth tokens and authorize our requests against the Yelp API.

yarn add dotenv

At the root of our project (not in src) we’ll create a new file called .env

You’ll need to get an auth token to use with the Yelp API; you can follow the instructions here to get one.

Inside of the .env file, add the following content, being sure to replace “YOUR_YELP_TOKEN” with the one you get from the process above.

YELP_TOKEN=YOUR_YELP_TOKEN

This would be a good time to ensure we don’t push our credentials or other cruft to our repository. Let’s create a new file (at the root) called .gitignore, which specifies names or patterns of names to exclude from publishing. Add the following content to the .gitignore file.

node_modules/
.env
build/

Now, inside of our index.js file, we’ll add a line of code requiring our credentials and making them available for later use. At the top of the file, add the following line: require('dotenv').config()

Additionally, we’ll add some constants for our API strings.

const WEATHER_API = 'https://localhost:9000';
const MY_API = 'https://api-euwest.hygraph.com/v1/cjslyzurw378n01bs1c3ip1ds/master';
const YELP_API = 'https://api.yelp.com/v3/graphql';

Note that if you are not running your weather API locally, you’ll need to use a new URL for wherever you have that server running.

Our index.js file should now look like the following:

require("dotenv").config()
import { ApolloServer } from 'apollo-server';
// Our APIs
const WEATHER_API = 'https://localhost:9000';
const MY_API = 'https://api-euwest.hygraph.com/v1/cjslyzurw378n01bs1c3ip1ds/master';
const YELP_API = 'https://api.yelp.com/v3/graphql';
// Server Function
async function run(){
// Server Code - the end.
const server = new ApolloServer({ schema });
server.listen(8000).then(({ url }) => {
console.log(`? Server ready at ${url}`);
});
}
try {
console.log('get ready')
run()
} catch (e) {
console.log(e)
}

Step 3: Inspecting our APIs

As mentioned above, a critical step in schema stitching is introspecting the types and fields that are available at the various endpoints. This makes each API available for transforming with additional Apollo tooling. To begin, we’ll create a helper function that lets us introspect the remote APIs and make them available for processing.

/* First we need to fetch our remote APIs,
introspect their content, and then use
Apollo to merge their schemas. */
const createRemoteSchema = async (uri, settings) => {
  const config = { uri, fetch, ...settings };
  try {
    const link = new HttpLink(config);
    // Introspection is what gives us
    // the self-documenting magic of GraphQL
    const schema = await introspectSchema(link);
    return makeRemoteExecutableSchema({
      schema,
      link,
    });
  } catch (error) {
    console.log(error)
  }
};

If you’re reading carefully, you’ll note two new dependencies called HttpLink and fetch - let’s add them now.

yarn add apollo-link-http node-fetch

We’ll import them below our ApolloServer imports.

import { HttpLink } from 'apollo-link-http';
import fetch from 'node-fetch';

Next, we’ll use the helper function to make those APIs available. For the Yelp API, we’ll pass our credentials from our .env file. We read them from the process.env global, which is available to our server but not to our front-end.

// Process the APIs
const remoteWeatherAPI = await createRemoteSchema(WEATHER_API)
const myRemoteAPI = await createRemoteSchema(MY_API)
const remoteYelp = await createRemoteSchema(YELP_API, {
  credentials: "include",
  headers: {
    "Authorization": `Bearer ${process.env.YELP_TOKEN}`
  }
})

Step 4: Handle Conflicts

Since we are combining multiple APIs that are likely maintained by different teams, if not different companies, we need to check for name collisions. Apollo’s default behaviour is simply to take the last API in as the final definition in the case of a naming conflict. We can account for those conflicts by transforming the APIs before we merge them in our final step.

We’ll be transforming both our own content API as well as the Yelp API since they each have a naming collision with the Location type.

/* Here I rename some collisions around the name 'location'
- but I also remove all non-query operations (which our
Hygraph API exposes) just to keep things cleaner. */
const myTransformedAPI = transformSchema(myRemoteAPI, [
  new FilterRootFields(
    (operation, rootField) => operation === 'Query'
  ),
  new RenameTypes((name) =>
    name === 'Location' ? `GCMS_${name}` : name),
  new RenameRootFields((operation, name) =>
    name === 'location' ? `GCMS_${name}` : name),
]);

const yelpTransformedAPI = transformSchema(remoteYelp, [
  new FilterRootFields(
    (operation, rootField) => operation === 'Query'
  ),
  new RenameTypes((name) =>
    name === 'Location' ? `Place` : name),
  new RenameRootFields((operation, name) =>
    name === 'location' ? `place` : name),
]);

In the first example, I simply prefix the type name; it’s also possible to rename the type completely, as is the case for the Yelp schema. We need to handle collisions on both the type name and the root field query.
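The renaming logic itself is just a function from old name to new name. Pulled out of the transforms, the two strategies look like this:

```javascript
// Strategy 1: prefix the colliding type name (used for the Hygraph schema).
const prefixRename = (name) => (name === 'Location' ? `GCMS_${name}` : name);

// Strategy 2: replace the colliding type name outright (used for Yelp).
const fullRename = (name) => (name === 'Location' ? 'Place' : name);

console.log(prefixRename('Location')); // GCMS_Location
console.log(fullRename('Location'));   // Place
console.log(fullRename('Businesses')); // unchanged: Businesses
```

RenameTypes and RenameRootFields simply apply functions like these to every name in the schema, leaving non-colliding names untouched.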

Again, we’ve added a new dependency.

yarn add graphql-tools

Let’s import the helpers below our Apollo imports. We’re including a few more that will be used in our next steps.

import {
  makeRemoteExecutableSchema,
  mergeSchemas,
  transformSchema,
  FilterRootFields,
  RenameTypes,
  RenameRootFields,
  introspectSchema
} from 'graphql-tools';

Step 5: Create Relationships

The final phase before merging the schemas is to create a new schema that connects our different schemas together. We’ll add this block, written in the Schema Definition Language (SDL):

/* This is an important step, it lets us tell the schema
which fields should be connected between the schemas. */
const linkTypeDefs = `
  extend type Conference {
    location: Location
  }
  extend type Location {
    hotels: Businesses
    food: Businesses
  }
`;

Here we’ve told our Conference type coming from Hygraph to add a new location field that resolves to the Location type from our Weather API. We’ve told the Location type to include hotels and food fields that connect to Yelp’s Businesses type.
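Once these new fields are wired up and resolved (in the next step), a client can ask for conference, weather, and restaurant data in a single request. A hypothetical stitched query might look like the following; the exact root field and leaf field names depend on the individual APIs, so treat these as placeholders:

```graphql
query {
  conferences {
    city
    startDate
    location {      # resolved via the Weather API
      lat
      lon
      food {        # resolved via Yelp's search
        total
      }
    }
  }
}
```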

Step 6: Merge the Schemas

The moment has come, we finally merge the schemas together with the following method.

const schema = mergeSchemas({
  schemas: [...],
  resolvers: {...}
});

The schemas property is simply a list of our final transformed APIs, along with our linkTypeDefs.

// Merge these Schemas
schemas: [
  remoteWeatherAPI,
  myTransformedAPI,
  yelpTransformedAPI,
  linkTypeDefs,
],

The resolvers property is a map of types, each with its fields and the logic to apply when resolving the data. Let’s break down one of them.

resolvers: {
  // Which type gets the new fields
  Conference: {
    // Which field (defined in our linkTypeDefs)
    location: {
      // What's the 'value' we will pass
      // in from our existing Schema,
      // effectively a subquery.
      fragment:
        `... on Conference { city, country, startDate }`,
      resolve(response, args, context, info) {
        return info.mergeInfo.delegateToSchema({
          // Which Schema returns the data for the field above
          schema: remoteWeatherAPI,
          // What's the operation it should perform
          operation: 'query',
          // What field is it querying ON the delegated Schema?
          fieldName: 'location',
          // What arguments do we pass in -
          // from our query above which is a JSON response?
          args: {
            place: `${response.city}, ${response.country}`,
            date: `${response.startDate}`
          },
          context,
          info,
          transforms: myTransformedAPI.transforms
        });
      },
    },
  },
}

So, our type Conference (from GCMS) has the field location as defined in our linkTypeDefs.

The location field essentially runs a subquery for city, country and startDate on Conference, and passes that data into the resolver as the response. The resolve method pulls the variables it needs off the response object and passes them as args to the schema it delegates to for this data, our remoteWeatherAPI schema. It defines the operation (a query here, though it could also be a mutation) and which field on the root Query type to query for the response data. Finally, it passes along the context, the info, and the transforms we defined above.
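Stripped of the GraphQL machinery, the delegation step is just “build arguments from the parent object, then call another executor.” A plain-JavaScript caricature of it, with every name invented for illustration, might look like:

```javascript
// A stand-in for the remote weather schema's query executor.
const weatherExecutor = {
  query: {
    location: ({ place, date }) => ({ place, date, tempC: 18 }),
  },
};

// A caricature of delegateToSchema: pick the operation and field
// on the target schema, and pass along the freshly built args.
function delegate({ schema, operation, fieldName, args }) {
  return schema[operation][fieldName](args);
}

// The parent object, i.e. what the Conference fragment resolved.
const response = { city: 'Berlin', country: 'Germany', startDate: '2019-06-05' };

const result = delegate({
  schema: weatherExecutor,
  operation: 'query',
  fieldName: 'location',
  args: {
    place: `${response.city}, ${response.country}`,
    date: response.startDate,
  },
});
console.log(result.place); // Berlin, Germany
```

The real delegateToSchema additionally rewrites the selection set, applies the transforms, and merges errors, but the data flow is the same.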

Adding the remainder of our resolvers, along with the rest of our code, we have a complete server delivering a stitched schema for our consumption.

require("dotenv").config()
import { ApolloServer } from 'apollo-server';
import { HttpLink } from 'apollo-link-http';
import {
makeRemoteExecutableSchema,
mergeSchemas,
transformSchema,
FilterRootFields,
RenameTypes,
RenameRootFields,
introspectSchema
} from 'graphql-tools';
import fetch from 'node-fetch';
// Our APIs
const WEATHER_API = 'https://localhost:9000';
const MY_API = 'https://api-euwest.hygraph.com/v1/cjslyzurw378n01bs1c3ip1ds/master';
const YELP_API = 'https://api.yelp.com/v3/graphql';
// Server Function
async function run(){
/* First we need to fetch our remote APIs,
inspect their content and then apply the use
Apollo to merge their schemas. */
const createRemoteSchema = async (uri,settings) => {
const config = {uri: uri, fetch, ...settings}
try {
const link = new HttpLink(config);
// Introspection is what gives us
//the self documenting magic of GraphQL
const schema = await introspectSchema(link);
return makeRemoteExecutableSchema({
schema,
link,
});
} catch (error) {
console.log(error)
}
};
// Process the APIs
const remoteWeatherAPI = await createRemoteSchema(WEATHER_API)
const myRemoteAPI = await createRemoteSchema(MY_API)
const remoteYelp = await createRemoteSchema(YELP_API, {
credentials: "include",
headers: {
"Authorization": `Bearer ${process.env.YELP_TOKEN}`
}
})
/* Here I rename some more collisions around the name 'location'
- but I also remove all non query operations just to keep things
cleaner. We see those from our Hygraph API. */
const myTransformedAPI = transformSchema(myRemoteAPI, [
new FilterRootFields(
(operation, rootField) => operation === 'Query'
),
new RenameTypes((name) =>
name === 'Location' ? `GCMS_${name}` : name),
new RenameRootFields((operation, name) =>
name === 'location' ? `GCMS_${name}` : name),
]);
const yelpTransformedAPI = transformSchema(remoteYelp, [
new FilterRootFields(
(operation, rootField) => operation === 'Query'
),
new RenameTypes((name) =>
name === 'Location' ? `Place` : name),
new RenameRootFields((operation, name) =>
name === 'location' ? `place` : name),
]);
/* This is an important step, it lets us tell the schema
which fields should be connected between the schemas. */
const linkTypeDefs = `
extend type Conference {
location: Location
}
extend type Location {
hotels: Businesses
food: Businesses
}
`;
/* Finally we merge the schemas but also add the resolvers
which tells GraphQL how to resolve our newly added fields. */
const schema = mergeSchemas({
// Merge these Schemas
schemas: [
remoteWeatherAPI,
myTransformedAPI,
yelpTransformedAPI,
linkTypeDefs,
],
// Resolve them here
resolvers: {
// Which type gets the new fields
Conference: {
// Which field
location: {
// What's the 'value' we will pass in from our existing Schema
fragment: `... on Conference { city, country, startDate }`,
resolve(response, args, context, info) {
return info.mergeInfo.delegateToSchema({
// Which Schema returns the data for the field above
schema: remoteWeatherAPI,
// What's the operation it should perform
operation: 'query',
// What field is is querying ON the delegated Schema?
fieldName: 'location',
// What arguments do we pass in -
// from our query above which is a JSON response?
args: {
place: `${response.city}, ${response.country}`,
date: `${response.startDate}`
},
context,
info,
transforms: myTransformedAPI.transforms
});
},
},
},
Location: {
// Which field
hotels: {
// What's the 'value' we will pass in from our existing Schema
fragment: `... on Conference { city, country }`,
resolve(response, args, context, info) {
return info.mergeInfo.delegateToSchema({
// Which Schema returns the data for the field above
schema: remoteYelp,
// What's the operation it should perform
operation: 'query',
// What field is is querying ON the delegated Schema?
fieldName: 'search',
// What arguments do we pass in -
// from our query above which is a JSON response?
args: {
location: `${response.city}, ${response.country}`,
term: "Hotels"
},
context,
info,
transforms: yelpTransformedAPI.transforms
});
},
},
food: {
// What's the 'value' we will pass in from our existing Schema
fragment: `... on Conference { city, country }`,
resolve(response, args, context, info) {
return info.mergeInfo.delegateToSchema({
// Which Schema returns the data for the field above
schema: remoteYelp,
// What's the operation it should perform
operation: 'query',
// What field is is querying ON the delegated Schema?
fieldName: 'search',
// What arguments do we pass in -
// from our query above which is a JSON response?
args: {
location: `${response.city}, ${response.country}`,
term: "Burgers"
},
context,
info,
// transforms: myTransformedAPI.transforms
});
},
}
}
}
});
// Server Code - the end.
const server = new ApolloServer({ schema });
server.listen(8000).then(({ url }) => {
console.log(`? Server ready at ${url}`);
});
}
try {
console.log('get ready')
run()
} catch (e) {
console.log(e)
}

#Closing

If you’ve tracked with all of that, good for you! There was a lot of ground covered, but hopefully, you found it helpful.

Again, not every situation (perhaps not even most situations) calls for schema stitching. It is, however, a powerful tool in the GraphQL toolbox for the situations where it is needed, and Apollo makes it very straightforward to work with. Using tools like schema stitching allows you to combine features like a static content hub in Hygraph with real-time data like the weather.

As always, if you found something helpful or an error that needs to be corrected, let us know in the community Slack! We’re always hanging around. Thanks for reading!
