SOLID gold

If there is one thing I have learnt during my career in the IT industry, it’s that the industry is a fickle beast. Trends and fashions come and go. Languages fall by the wayside (hey, COBOL74!). How often have you read an article declaring a new framework a “game changer”, only to realise after using it in anger that it does a fraction of what a venerable equivalent does in its sleep?

In this article I’m going to cover something that has not changed and has not gone out of fashion. It crops up again and again.

If there’s one thing you need to learn and, more importantly, USE as a software engineer, it is encapsulated (see what I did there?) in these five principles. But hey, enough of my yakkin’, whaddaya say? Let’s boogie!

The SOLID principles are a set of five design guidelines in object-oriented software development that help engineers create systems that are easy to maintain, scale, and understand. Introduced by Robert C. Martin, these principles aim to reduce “code rot” and make software more robust.


1. Single Responsibility Principle (SRP)

“A class should have one, and only one, reason to change.”

This principle states that a component should perform a single function. When a class handles multiple unrelated tasks, it becomes fragile: a change in one task might accidentally break another. You might be tempted to add a small related function, but don’t do it. Do what is right and create a new class, even if it has only one function. Smaller classes are great: fewer dependencies, easier to test. What’s not to like?

  • Example: Imagine a User class that handles both user data and saving that data to a database. If you change your database schema, you have to modify the User class.
  • Better Approach: Create a User class for data and a UserRepository class for database operations.
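As a rough sketch in JavaScript (the Map-backed store is a stand-in for a real database, and all names are illustrative):

```javascript
// User holds data only; its single reason to change is the shape of a user.
class User {
  constructor(name, email) {
    this.name = name;
    this.email = email;
  }
}

// UserRepository handles persistence; its single reason to change
// is the storage mechanism.
class UserRepository {
  constructor() {
    this.store = new Map(); // stand-in for a real database connection
  }

  save(user) {
    this.store.set(user.email, user);
    return user;
  }

  findByEmail(email) {
    return this.store.get(email);
  }
}
```

A database schema change now only touches UserRepository; the User class stays untouched.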

2. Open/Closed Principle (OCP)

“Software entities should be open for extension, but closed for modification.”

This somewhat opaquely named principle states that you should be able to add new functionality to a system without changing existing code. This prevents bugs from being introduced into parts of the application that are already working. It comes down to my tenet of minimal code change. Remember, every code change has the possibility to introduce bugs!

  • Example: A Discount class that uses a series of if/else statements to check for “VIP” or “Seasonal” discounts. Adding a new discount type requires changing the existing logic.
  • Better Approach: Use an interface or abstract class DiscountStrategy. Each new discount type becomes a new class that implements this interface.
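A minimal JavaScript sketch of the strategy approach (the concrete class names and discount rates are made up for illustration):

```javascript
// Each discount is its own strategy class exposing the same apply(price) contract.
class VipDiscount {
  apply(price) { return price * 0.8; } // 20% off
}

class SeasonalDiscount {
  apply(price) { return price * 0.9; } // 10% off
}

// The checkout code is closed for modification: it only knows the contract,
// so adding a new discount type never changes this function.
function finalPrice(price, strategy) {
  return strategy.apply(price);
}
```

Adding a “Student” discount later is a new class, not an edit to existing if/else logic.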

3. Liskov Substitution Principle (LSP)

“Subtypes must be substitutable for their base types.”

Barbara Liskov is a pioneer who fundamentally changed how we write and organize code. Before her work in the 1970s, code was often a messy “spaghetti” of instructions. Liskov pioneered the concept of Data Abstraction. She led the team that created CLU, a programming language that introduced the idea of “abstract data types”—the direct ancestor of the “Classes” and “Objects” we use in almost every modern language like Java, Python, and C++. I hope you enjoyed that little history lesson. Let’s proceed.

This principle states that if a program is using a base class, it should be able to use any of its subclasses without knowing it or causing errors. The subclass must honor the “contract” of the parent class.

  • Example: A classic violation is the “Square-Rectangle” problem. If a Square inherits from Rectangle but throws an error when the height and width are set to different values, it breaks the program’s expectations.
  • Better Approach: If a subclass cannot perform the actions of the parent in the same way, they likely shouldn’t share that specific inheritance hierarchy.
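A JavaScript sketch of the Square-Rectangle problem (the setter methods are assumed for illustration):

```javascript
// Rectangle's contract: width and height vary independently.
class Rectangle {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }
  setWidth(w) { this.width = w; }
  setHeight(h) { this.height = h; }
  area() { return this.width * this.height; }
}

// Square keeps its sides equal, silently breaking Rectangle's contract.
class Square extends Rectangle {
  constructor(side) { super(side, side); }
  setWidth(w) { this.width = this.height = w; }
  setHeight(h) { this.width = this.height = h; }
}

// Code written against Rectangle expects an area of 5 * 4 = 20,
// but a Square substituted in returns 16 — a silent LSP violation.
function resize(rect) {
  rect.setWidth(5);
  rect.setHeight(4);
  return rect.area();
}
```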

4. Interface Segregation Principle (ISP)

“Clients should not be forced to depend on methods they do not use.”

I’ve seen this many times! You have to implement an interface in order to use a specific API call, but you then realise you must also implement functions you are not interested in, leading to the dreaded “not implemented” comment. By the way, if you come across this situation, it can be partly remedied with the Adapter Pattern.

It is better to have many small, specific interfaces than one large, “fat” interface. This prevents implementing classes from being burdened with “dummy” methods that do nothing.

  • Example: An IMachine interface with Print(), Scan(), and Fax(). A basic Printer class would be forced to implement Scan() and Fax() even if it can’t perform those actions.
  • Better Approach: Break the interface into IPrinter, IScanner, and IFax.
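JavaScript has no interface keyword, so a sketch has to rely on duck typing; in a typed language these would be the IPrinter, IScanner and IFax interfaces. The class and method names below are illustrative:

```javascript
// A basic printer implements only the capability it genuinely has.
class BasicPrinter {
  print(doc) { return `printed: ${doc}`; }
}

// A multifunction device opts into every role it supports.
class MultiFunctionMachine {
  print(doc) { return `printed: ${doc}`; }
  scan(doc) { return `scanned: ${doc}`; }
  fax(doc) { return `faxed: ${doc}`; }
}

// Client code depends only on the capability it needs; it never forces
// a plain printer to carry dummy scan or fax methods.
function runPrintJob(printer, doc) {
  return printer.print(doc);
}
```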

5. Dependency Inversion Principle (DIP)

“Depend on abstractions, not concretions.”

High-level modules (the logic) should not depend on low-level modules (the tools). Both should depend on abstractions (interfaces). This “decouples” the code, making it easy to swap out components.

This is great for writing tests (and you should be writing tests, many, many tests!). It allows you to easily mock the dependencies.

  • Example: A NotificationService that directly creates an instance of EmailSender. If you want to switch to SMSSender, you have to rewrite the NotificationService.
  • Better Approach: The NotificationService should depend on an IMessageSender interface. You can then “inject” whichever sender you need at runtime.
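In JavaScript the IMessageSender “interface” is implicit: any object with a send(to, message) method satisfies it. A rough sketch:

```javascript
// Both senders satisfy the same implicit contract: send(to, message).
class EmailSender {
  send(to, message) { return `email to ${to}: ${message}`; }
}

class SmsSender {
  send(to, message) { return `sms to ${to}: ${message}`; }
}

// NotificationService depends on the abstraction; a concrete sender
// is injected at construction time, never created inside the class.
class NotificationService {
  constructor(sender) {
    this.sender = sender;
  }
  notify(user, message) {
    return this.sender.send(user, message);
  }
}
```

In a unit test you can inject a fake sender that records calls instead of sending anything.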

Conclusion

At the end of the day, SOLID is about managing change. Requirements shift, APIs evolve, and businesses pivot.

By following these five principles, you aren’t just writing code for today; you’re leaving a map for the developer who has to touch this file six months from now. It turns software from a fragile house of cards into a robust, modular system.

Before I go, here is a test. Write some code. Store it away for a year. Then look at your code. Is it still readable and understandable? Is it SOLID?

GraphQL Part 3 – Persistence with MongoDB

In our last post, we mastered Mutations. We can now query, add, update, and delete films from our Hammer collection. However, every time we restart our Apollo server, our changes vanish into the ether. Our “Watched” list resets, and that film we deleted? It’s back from the dead—and not in a cool, technicolor, cinematic way.

To fix this, we need Data Persistence. In this post, we’ll swap our humble, local JavaScript array for a MongoDB database.

If you haven’t done so already, clone the lab GitHub repository using

git clone https://github.com/jmwollny/lab.git

Install MongoDB

I’m installing on a Mac; if you want to install MongoDB on another system, go to the MongoDB download page here.

brew tap mongodb/brew
brew install mongodb-community

Now start the MongoDB server. This command also ensures that the MongoDB server will restart at login.

brew services start mongodb/brew/mongodb-community

Now check that we have a running instance. Type mongosh. If the shell appears you are golden and ready to proceed to the next section. Type exit to leave the shell.

Setting Up the MongoDB connection

First, we need to install the MongoDB driver. In your terminal, run:

cd lab/graphql-tutorial-3
npm install mongoose

Mongoose is an Object Data Modeling (ODM) library that makes talking to MongoDB from Node.js much easier. If you open index.js you will see that the films array has been replaced with a MongoDB connection to a database called hammer_films.

const mongoose = require('mongoose');

// Connect to your local or Atlas MongoDB instance.
// Note: useNewUrlParser and useUnifiedTopology are no-ops in Mongoose 6+
// and can be omitted on recent versions.
mongoose.connect('mongodb://localhost:27017/hammer_films', {
  useNewUrlParser: true,
  useUnifiedTopology: true
});

const db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', () => console.log('Connected to MongoDB!'));

Defining the Data Model

In GraphQL, we have a Schema. In MongoDB (via Mongoose), we have a Model. These two need to mirror each other so our data flows correctly. A new file called Film.js contains the MongoDB model, which has been exported so it can be shared by seed.js (more about this later!).

const mongoose = require('mongoose');

const filmSchema = new mongoose.Schema({
  title: { type: String, required: true },
  year: { type: Number, required: true },
  watched: { type: Boolean, default: false }
});

// Export the model so both index.js and seed.js can use it
module.exports = mongoose.model('Film', filmSchema);

Updating the Resolvers

This is where the magic happens. Instead of using .find() or .splice() on a local array, we will use Mongoose methods which return Promises. GraphQL handles these asynchronous calls automatically.

The new Queries and Mutations

const resolvers = {
  Query: {
    films: async (parent, args) => {
      // 1. Build a dynamic query object
      let query = {};

      // Watch filter
      if (args.watched !== undefined) {
        query.watched = args.watched;
      }

      // Year filter (Exact match)
      if (args.year) {
        query.year = args.year;
      }

      // Date range filter (using MongoDB operators $gte and $lte).
      // Note: a range filter replaces any exact-match year filter above,
      // since query.year cannot be both a number and a range object.
      if (args.where) {
        const range = {};
        if (args.where.year_gte) {
          range.$gte = args.where.year_gte;
        }
        if (args.where.year_lte) {
          range.$lte = args.where.year_lte;
        }
        if (Object.keys(range).length > 0) {
          query.year = range;
        }
      }

      // Search filter (using Regex for case-insensitive partial match)
      if (args.searchTerm) {
        query.title = { $regex: args.searchTerm, $options: 'i' };
      }

      // Execute the query against the database
      return await FilmModel.find(query);
    },

    // Find by ID - Mongoose maps GraphQL 'id' to MongoDB '_id' automatically
    film: async (parent, args) => await FilmModel.findById(args.id),
  },

  Mutation: {
    addFilm: async (parent, { input }) => {
      // Create a new instance and save it
      const newFilm = new FilmModel(input);
      return await newFilm.save();
    },

    updateWatched: async (parent, { id, watched }) => {
      const updatedFilm = await FilmModel.findByIdAndUpdate(
        id,
        { watched },
        { new: true }, // This flag returns the record *after* it was updated
      );

      if (!updatedFilm) {
        throw new Error('Film not found');
      }

      return updatedFilm;
    },

    deleteFilm: async (parent, { id }) => {
      const deleted = await FilmModel.findByIdAndDelete(id);
      if (!deleted) {
        throw new Error('Film not found');
      }

      return await FilmModel.find();
    },
  },
};

Testing Persistence

Restart your server with node index.js. Now, head back to your GraphQL sandbox at http://localhost:4000/. We can test that after adding a film and restarting our Apollo server, the film still exists!

In the sandbox run a mutation to add a new film.

mutation CreateFilm($input: CreateFilmInput!) {
  addFilm(input: $input) {
    id
    title
    year
    watched
  }
}

Remember to add the variables JSON.

{
  "input": {
    "title": "The Brides of Dracula",
    "year": 1960,
    "watched": false
  }
}

Run the query, then shut down your server (Ctrl + C in the terminal). Start the server again using node index.js.

Run a query to retrieve all films.

query GetAllFilms {
  films {
    id
    title
    watched
    year
  }
}

Query result

{
  "data": {
    "films": [
      {
        "id": "69dfb8e8067cfb4bcaadeb6d",
        "title": "The Brides of Dracula",
        "watched": false,
        "year": 1960
      }
    ]
  }
}

If all has gone well, your data is still there! Unlike our local array, MongoDB has written this data to the disk.

Why use Mongoose with GraphQL?

You might notice that our FilmModel and our GraphQL type Film look very similar. This redundancy is actually a strength. The GraphQL Schema acts as a contract for your frontend (telling it what data it can ask for), while the Mongoose Model acts as a gatekeeper for your database (telling it how the data must be stored).

The “ID” Gotcha

MongoDB uses a field called _id by default. GraphQL usually expects id. Mongoose is smart enough to provide a virtual id field that maps to _id, so the existing queries like film(id: "...") continue to work without a hitch.

Importing the full list of films

Let’s finish by importing our film list into MongoDB, then we can get down to the fun job of watching every one and marking them as watched as we go.

To do this I have provided a handy script. Running the script will clear the database and import all films. All you need to do is open a terminal and run node seed.js.

node seed.js
Connected to MongoDB for seeding...
Old records removed.
157 Hammer films successfully added to the database!

Let’s have a look at the script.

const mongoose = require('mongoose');
const fs = require('fs');
// Import your Mongoose model
const Film = require('./models/Film'); 

const seedDatabase = async () => {
  try {
    // Connect to MongoDB
    await mongoose.connect('mongodb://127.0.0.1:27017/hammer_films');
    console.log("Connected to MongoDB for seeding...");

    // Read the JSON file
    const data = JSON.parse(fs.readFileSync('./films.json', 'utf-8'));

    // Clear existing films
    await Film.deleteMany({});
    console.log("Old records removed.");

    // Bulk insert the data
    await Film.insertMany(data);
    console.log(`${data.length} Hammer films successfully added to the database!`);

    // Close the connection
    process.exit();
  } catch (error) {
    console.error("Error seeding database:", error);
    process.exit(1);
  }
};

seedDatabase();

This script does the following:

  • Connects to MongoDB
  • Parses the list of films (note: we no longer need the id field in the JSON)
  • Deletes all records in the database
  • Inserts all records defined in the JSON file

Conclusion

We’ve successfully moved our Hammer database from a “temporary” state to a “permanent” one. By integrating MongoDB, we’ve laid the groundwork for a real-world application. We are no longer just playing with variables in memory; we are managing a persistent data store. As an exercise, try creating different queries or, if you are feeling brave, add more fields to the schema. Have fun coding and, if you feel inclined, watching one of the suggested films 🙂

GraphQL Part 2 – Mastering Mutations

In our last post, we built a robust way to search through 157 Hammer classics. But what happens when you finally sit down to watch The Brides of Dracula? You need a way to update that record.

In GraphQL, any operation that changes data is called a Mutation.

If you have not followed part 1 of this tutorial, go there now to pull the code from my GitHub repository.

1. Updating the Schema

First, we need to tell our server what these changes look like. We’ll add a Mutation type to our typeDefs. We have one to update a film entry (updateWatched) and one to delete a film (deleteFilm).

type Mutation {
  # Toggle the watched status of a film
  updateWatched(id: ID!, watched: Boolean!): Film
  
  # Delete a film from our collection
  deleteFilm(id: ID!): [Film]
}

2. Writing the Resolvers

Now, we implement the logic to update and delete a film record. Since we’re working with a local array of film data, we’ll use standard JavaScript array methods, find and splice, to locate and modify our records.

const resolvers = {
  // ... query resolvers
  
  Mutation: {
    updateWatched: (parent, { id, watched }) => {
      const film = films.find(f => f.id == id);
      if (!film) {
        throw new Error("Film not found");
      }
      
      film.watched = watched;
      return film;
    },
    
    deleteFilm: (parent, { id }) => {
      const index = films.findIndex(f => f.id == id);
      if (index == -1) {
        throw new Error("Film not found");
      }
      
      // Remove the film and return the updated list
      films.splice(index, 1);
      return films;
    }
  }
};

3. Testing in the Playground

Once you restart your server, you can test these live.

node index.js
🚀 Server ready at http://localhost:4000/

Navigating to http://localhost:4000/ will redirect to the GraphQL sandbox. Click the ‘Query your server’ button to open the query editor.

First of all we need to find all unwatched Dracula films returning the id. Quick quiz! Do you remember how to craft the query? Here it is:

query GetNotWatched {
  films(watched: false, searchTerm: "dracula") {
    id
    title
  }
}

This will return the following.

{
  "data": {
    "films": [
      {
        "id": "70",
        "title": "The Brides of Dracula"
      },
      {
        "id": "102",
        "title": "Dracula: Prince of Darkness"
      },
      {
        "id": "115",
        "title": "Dracula Has Risen from the Grave"
      },
      {
        "id": "122",
        "title": "Scars of Dracula"
      },
      {
        "id": "125",
        "title": "Countess Dracula"
      }
    ]
  }
}

Pick a film and remember the ID. This will be used in the next step. In my case I will pick the first film which has an id of 70.

Mark “Brides of Dracula” (ID: 70) as watched:

Paste this query into the sandbox (remember to substitute your own id if it is different).

mutation {
  updateWatched(id: "70", watched: true) {
    title
    watched
  }
}

After running the update mutation, run the GetNotWatched query again to check that the film is no longer in the list.

Removing a film

Let’s remove “Brides of Dracula”.

mutation {
  deleteFilm(id: "70") {
    title
  }
}

If we run a query to return the film with id 70 GraphQL will now return null.

query GetFilm {
  film(id: 70) {
    id
    watched
    year
  }
}

Results from the query

{
  "data": {
    "film": null
  }
}

Adding a film

Let’s add the film back!

When adding a record, passing four separate arguments (ID, Title, Year, Watched) can get messy. Instead, we define an input type in our schema to group them together.

Update the Schema

We create a new input which specifies which fields are mandatory when creating a new Film. In our case all fields must be specified (indicated by the “!”). This input spec is then referenced in our addFilm mutation.

input CreateFilmInput {
  id: ID!
  title: String!
  year: Int!
  watched: Boolean!
}

type Mutation {
  # ... previous mutations
  addFilm(input: CreateFilmInput!): Film
}

Update the Resolver

const resolvers = {
  Mutation: {
    // ... updateWatched and deleteFilm
    
    addFilm: (parent, { input }) => {
      // Check if ID already exists to prevent duplicates
      const exists = films.find(f => f.id === input.id);
      if (exists) {
        throw new Error("A film with this ID already exists.");
      }

      const newFilm = { ...input };
      films.push(newFilm);
      return newFilm;
    }
  }
};

Testing the “Add” Mutation

Enter the query into the sandbox.

mutation CreateNewHammerFilm($input: CreateFilmInput!) {
  addFilm(input: $input) {
    id
    title
    year
  }
}

Under the query box, enter the variables JSON.

{
  "input": {
    "id": "70",
    "title": "The Brides of Dracula",
    "year": 1960,
    "watched": false
  }
}

After running the mutation you can run the GetFilm query, which will show the resurrected film in all its glory!

query GetFilm {
  film(id: 70) {
    id
    watched
    year
  }
}

Why use input types?

Using an input object instead of flat arguments makes your API much more maintainable. If you decide to add a director or studio field later, you only have to update the input type, rather than changing the signature of the mutation across your entire codebase.
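For instance, if a hypothetical director field were added later, only the input type (and the corresponding Film type) would change:

```graphql
input CreateFilmInput {
  id: ID!
  title: String!
  year: Int!
  watched: Boolean!
  # New optional field; the addFilm mutation signature is untouched
  director: String
}
```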

Why “Mutation” instead of “Query”?

While you could technically change data inside a Query resolver, it’s a major “no-go” in the GraphQL world. Using the Mutation keyword tells the server (and other developers) that this operation has side effects. It also ensures that if you send multiple mutations in one request, they run serially (one after another) to prevent data race conditions.
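For example, if the two mutations from earlier are combined into a single operation, the server guarantees they execute top to bottom, so the update completes before the delete begins:

```graphql
mutation UpdateThenDelete {
  updateWatched(id: "70", watched: true) {
    watched
  }
  deleteFilm(id: "70") {
    title
  }
}
```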

Conclusion

We’ve come a long way from a simple JavaScript array of my favourite films. By implementing Mutations, we’ve transformed our Hammer dataset into a functional API. We can now:

  • Create new entries to keep our database growing.
  • Update existing records to track our viewing progress.
  • Delete entries to keep our data clean and accurate.

This “CRUD” (Create, Read, Update, Delete) cycle is the backbone of almost every application you use daily. While we are currently managing this data in local memory via a simple array, the patterns we’ve used here—Input Types, Non-Nullable arguments, and Serial Mutation execution—are the exact same patterns you would use when connecting to a production-grade database like MongoDB or PostgreSQL.

What’s Next?

Now that the backend logic is solid, the next logical step is to explore Data Persistence. In the next post, we’ll look at how to hook this GraphQL server up to a database so that our “Watched” status doesn’t disappear every time we restart the server!

Until then, happy coding!

GraphQL Part 1 – A Modern Approach to APIs

For years, REST (Representational State Transfer) has been the standard for web services. However, as applications grow in complexity, developers often find themselves juggling dozens of endpoints and dealing with over-fetching data.

GraphQL is a query language for your API and a server-side runtime for executing those queries using a type system you define for your data. Instead of multiple “dumb” endpoints, GraphQL provides a single “smart” endpoint that can return exactly what the client asks for.


Why GraphQL?

  • No More Over-fetching: You get exactly the data you request—nothing more, nothing less.
  • Single Request, Multiple Resources: You can fetch data from different sources in one trip to the server.
  • Strongly Typed: GraphQL uses a schema to define what is possible, which acts as a contract between the frontend and backend.
  • Self-Documenting: Because of the schema, tools like GraphiQL allow you to browse the API structure effortlessly.

For this tutorial we will be working with a list of classic Hammer Studios films. Each film will have id, title, year and watched fields.

The code below shows the example Schema and Query definitions. The schema defines a Film as having four fields. Where the field type is suffixed with “!” it indicates that the field must not be null and will always return a value.

const typeDefs = gql`
  type Film {
    id: ID
    title: String!
    year: Int
    watched: Boolean
  }
  input FilmFilter {
    year_gte: Int
    year_lte: Int
  }
  type Query {
    # Return a list of films, optionally filtered by watched status, year, or search term in the title
    films(watched: Boolean, year: Int, searchTerm: String, where: FilmFilter): [Film]
    film(id: ID!): Film
  }
`;

After the Schema we have queries defined. If you define a query without any parameters, e.g. films: [Film], and then try to use a parameter in your query, GraphQL will complain…loudly, with a GRAPHQL_VALIDATION_FAILED error.

Here we have defined two queries.

  1. films(watched: Boolean, year: Int, searchTerm: String, where: FilmFilter): [Film] – return a list of Film objects. We can optionally supply zero or more of the following query parameters: watched, year, searchTerm, where (the last supports range queries on the year field)
  2. film(id: ID!) – return a single Film. The id parameter MUST be specified

Getting Started: A Simple Implementation

I have created a GitHub repo for this tutorial. It is straightforward to follow. Once you have cloned the repository, open readme.md for instructions. Alternatively, read on!

git clone https://github.com/jmwollny/lab.git
cd lab/graphql-tutorial
npm install

Once the dependencies have been installed you can run the Apollo server.

node index.js

You may be thinking, okay I’ve defined the Schema and the Queries, where do I get the data from and how do I map the queries to the underlying datasource?

The list of films is a hard-coded array defined in index.js. In practice we would be calling out to one or more data sources to get this information.

To map the queries to the underlying data and filter the results, we use resolvers.

Open a terminal

cd lab/graphql-tutorial

Open index.js. This file contains the Schema, Queries and Resolvers, and starts the Apollo server. In a production environment these would be split out into different files; we are using a single file to keep things simple.

At the bottom of this file you will see the resolvers definition. Inside the films arrow function we can create filters for each of our defined query parameters.

To filter the dataset we check for the presence of the query parameter and perform the filter using the built-in JavaScript filter function. We make sure to use the filtered list in any filters that follow.

When we are done we just return the list to the server.

const resolvers = {
  Query: {
    films: (parent, args) => {
      let filteredFilms = films;
      // Watch filter
      if (args.watched !== undefined) {
        filteredFilms = filteredFilms.filter(f => f.watched === args.watched);
      }   
      // Year filter
      if (args.year) {
        filteredFilms = filteredFilms.filter(f => f.year === args.year);
      }
      // Date range filter
      if (args.where) {
        if (args.where.year_gte) {
          filteredFilms = filteredFilms.filter(f => f.year >= args.where.year_gte);
        }
        if (args.where.year_lte) {
          filteredFilms = filteredFilms.filter(f => f.year <= args.where.year_lte);
        }
      }

      // Search filter
      if (args.searchTerm) {
        filteredFilms = filteredFilms.filter(f => 
          f.title.toLowerCase().includes(args.searchTerm.toLowerCase())
        );
      }
      
      return filteredFilms;
    },
    film: (parent, args) => films.find(f => f.id === args.id),
  },
};

GraphQL queries

For those used to SQL these may look a little odd at first, but they are quite straightforward once you get the hang of the syntax.

A simple query

Let’s retrieve the full list of films. Open your browser at http://localhost:4000/ then click the Query your Server button. If all is well the sandbox will open. Paste the following query.

query GetAllFilms {
  films {
    title
    watched
    year
  }
}

GetAllFilms is the name we give to our query. It can be anything that succinctly describes our query! Next we indicate that we want to execute the films query and return a list with title, watched and year fields. Note: you need to supply at least one field to be returned in the output.

Well done, you have successfully run your first GraphQL query 🙂

Using a filter in our query

Say we wanted all films containing the word “dracula” that were made in the 1970s and that we have already watched.

In order to specify a range we need to define some extra variables to support the query. In our case we need year_gte and year_lte to define our bounds.

input FilmFilter {
  year_gte: Int
  year_lte: Int
}

We then define a where query parameter that uses the FilmFilter

films(watched: Boolean, year: Int, searchTerm: String, where: FilmFilter): [Film]

The last piece of the puzzle is to update our resolver to use our new variables.

// Date range filter
if (args.where) {
  if (args.where.year_gte) {
    filteredFilms = filteredFilms.filter(f => f.year >= args.where.year_gte);
  }
  if (args.where.year_lte) {
    filteredFilms = filteredFilms.filter(f => f.year <= args.where.year_lte);
  }
}

Finally we craft our query using the new where parameter.

query GetSeventiesDracula {
  films(
    where: { year_gte: 1970, year_lte: 1979 }, 
    searchTerm: "dracula", 
    watched: true) {
      id
      title
      year
      watched
  }
}

A best practice when it comes to GraphQL is to separate the query data from the query itself. This is accomplished using variables. In the sandbox the variables JSON can be entered in the area underneath the query text box. Our new query will look like this.

query GetSeventiesDraculaWithVars($where: FilmFilter, $searchTerm: String, $watched: Boolean) {
  films(where: $where, searchTerm: $searchTerm, watched: $watched) {
    id
    title
    year
    watched
  }
}

After the query name we pass in the list of variables that we will provide values for, along with their types: ($where: FilmFilter, $searchTerm: String, $watched: Boolean). In the films query, instead of declaring the values, we provide placeholders prefixed by ‘$’.

All that remains is to provide the query with concrete values. The JSON will look like this.

{
  "where": {
    "year_gte": 1970,
    "year_lte": 1979
  },
  "searchTerm": "dracula",
  "watched": true
}

Alternatively you can open a terminal session and use the curl command as shown below.

curl -X POST http://localhost:4000/ \
-H "Content-Type: application/json" \
-d '{
  "query": "query GetFilteredFilms($where: FilmFilter, $searchTerm: String, $watched: Boolean) { films(where: $where, searchTerm: $searchTerm, watched: $watched) { id title year watched } }",
  "variables": {
    "where": {
      "year_gte": 1970,
      "year_lte": 1979
    },
    "searchTerm": "dracula",
    "watched": true
  }
}'

Query results

{"data":{"films":[{"id":"133","title":"Dracula A.D. 1972","year":1972,"watched":true},{"id":"142","title":"The Satanic Rites of Dracula","year":1973,"watched":true}]}}

When to Use GraphQL

While GraphQL is powerful, it isn’t always the “REST-killer.”

Use GraphQL when…

  • You have complex, nested data requirements.
  • You support multiple clients (Web, iOS, Android) with different data needs.
  • You want to aggregate data from multiple microservices.

Use REST when…

  • Your app is simple with few resources.
  • You need standard HTTP caching mechanisms.
  • You are building a very small, lightweight microservice.

Final Thoughts

GraphQL shifts the power from the server to the client. By allowing the frontend to dictate the data structure, it speeds up development cycles and reduces the payload sent over the wire. Once you get to grips with the extra boilerplate and query syntax, it is surprisingly easy to use.

You may now be asking, “well, this is all well and good, but how do I update or delete records from the database?” Well, dear reader, that is the topic for my next article.

Easy as PI Weather Station – putting it all together

Sorry it has taken me so long to continue with this series. There were little things that got in the way, such as C*****19 and going through redundancy, but let’s put those little things aside and recap. Last time we created a web service using Node Express which will be used to capture environmental data from our Raspberry Pi Sense HAT.

In this article we are going to hook things up by sending the data collected from the Raspberry Pi to our web service. We will also be updating our endpoints to handle the data correctly. Let’s get started!

First of all open the collector.py file.

We are going to POST the data to our web service endpoint. Find the line where we are checking if we have reached the interval and replace it with the code shown here.

if minute_count == MEASUREMENT_INTERVAL:
    # Create the payload object
    payload = {
        'date': dt.strftime("%Y/%m/%d %H:%M:%S"),
        'temperature': round(temp_c, 2),
        'pressure': round(sense.get_pressure(), 2),
        'humidity': round(sense.get_humidity(), 2),
    }
    data = urllib.urlencode(payload)
    request = urllib2.Request(END_POINT, data)
    response = urllib2.urlopen(request).read()
    print(payload)
    minute_count = 0

We are using a couple of Python libraries called urllib and urllib2 to do the heavy lifting of encoding our payload and sending it across to our Node.js server.
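A quick note: urllib and urllib2 are Python 2 standard-library modules. If your PI is running Python 3, the same functionality lives in urllib.parse and urllib.request. A minimal sketch of the equivalent POST might look like this (END_POINT is an assumed server URL for illustration):

```python
import urllib.parse
import urllib.request

END_POINT = "http://raspberrypi:3000/api/environment"  # assumed server URL

payload = {
    'date': '2020/05/26 14:01:00',
    'temperature': 28.36,
    'pressure': 1035.2,
    'humidity': 39.82,
}

# urllib.parse.urlencode replaces Python 2's urllib.urlencode;
# the request body must be bytes in Python 3, hence the encode()
data = urllib.parse.urlencode(payload).encode('utf-8')

# urllib.request.Request / urlopen replace urllib2.Request / urllib2.urlopen
request = urllib.request.Request(END_POINT, data)
# response = urllib.request.urlopen(request).read()  # uncomment with the server running
```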

All that is left is to add the new endpoint to our Node.js server to process the request, and to update the entries endpoint to return an actual list of weather data. Exciting eh! Open up another terminal session and navigate to the server directory. Using your editor of choice open up the index.js file.

Update the endpoints as shown below.

// Provide service meta data
app.get('/api/environment/meta', (req,res) => {
    res.header("Access-Control-Allow-Origin", "*");
    res.send({
        averageTemp:averageTemp,
        count:data.length,
        lastEntry:lastEntry
    } );
} );
// List all entries
app.get('/api/environment/entries', (req,res) => {
    res.header("Access-Control-Allow-Origin", "*");
    res.send(data);
} );
app.post('/api/environment', (req,res) => {
    if (!isValid(req.body)) {
        res.status(400).send('Invalid request, required fields missing.');
        return;
    }
    const count = data.length + 1;
    const entry = {
        id:count,
        date:req.body.date,
        temperature:req.body.temperature,
        pressure:req.body.pressure,
        humidity:req.body.humidity
    }
    lastEntry = entry;
    total += parseFloat(req.body.temperature);
    averageTemp = total / count;
    data.push(entry);
    res.json(entry);
} );

You may recall last time we added a dummy /api/environment/entries endpoint which simply returned an empty array.

Let’s flesh this out. The new /api/environment endpoint is defined as a POST method, which means the data is sent as part of the body of the request. We first validate that the required fields are present, then derive the new entry’s id from the current count. Next we build a JSON object by pulling out the parts of the request we are interested in. Finally we update the lastEntry variable and recalculate the average temperature to date before appending the entry to our list. One thing to watch: req.body is only populated if body-parsing middleware (for example app.use(express.urlencoded({ extended: true }))) was registered when the Express app was created, so add it if you have not already done so.
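To see the count/total/average bookkeeping in isolation, here is the same logic sketched as a small Python function. This is purely an illustration of what the Node.js endpoint does; add_entry is a name of my own invention, while the field names mirror the server code:

```python
def add_entry(data, entry, state):
    # mirror the Node.js endpoint: assign a 1-based id,
    # remember the last entry and keep a running mean temperature
    entry = dict(entry, id=len(data) + 1)
    state['lastEntry'] = entry
    state['total'] += float(entry['temperature'])
    state['averageTemp'] = state['total'] / entry['id']
    data.append(entry)
    return entry

# usage
data, state = [], {'total': 0.0, 'averageTemp': 0.0, 'lastEntry': None}
add_entry(data, {'temperature': 20.0}, state)
add_entry(data, {'temperature': 22.0}, state)
# state['averageTemp'] is now 21.0
```

Keeping a running total and dividing by the count avoids re-scanning the whole list on every request, which is exactly the trade-off the server code makes.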

With these changes in place we can run our collector and Node.js server to see the end-to-end implementation working in all its glory. I would recommend opening two separate terminals and laying them out side-by-side.

In the terminal for the Python collector start the data harvest using the command python collector.py. On your PI you should see regular temperature updates on the matrix display.

Raspberry PI Weather Station
Weather station running on the Raspberry PI

In the second terminal ensure you are in the collector/server directory and start the Node.js server using the command node index.js. If all is well you will see the message Listening on port 3000.

Terminal sessions running the collector and Node.js server
Terminal sessions showing the collector and Node.js server running on the PI

After a while you will see entries printed in the server console indicating that weather data has been collected from the PI and sent to our server.

Now comes the exciting bit. We can try out our new endpoints. Open a new browser tab and check the new endpoints are functioning correctly.

http://raspberrypi:3000/api/environment/entries

http://raspberrypi:3000/api/environment/meta

The new endpoints shown using the RESTED Chrome extension
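If you would rather poke the API from a script than a browser, a small Python 3 sketch can fetch and summarise the meta endpoint. This assumes the server is reachable at raspberrypi:3000; fetch_meta and summarise are names of my own invention:

```python
import json
import urllib.request

def fetch_meta(url="http://raspberrypi:3000/api/environment/meta"):
    # GET the meta endpoint and decode the JSON body
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

def summarise(meta):
    # build a one-line summary from the fields the meta endpoint returns
    return "{count} entries, average {averageTemp:.1f}C".format(**meta)

# with the server running: print(summarise(fetch_meta()))
```

With the server up, print(summarise(fetch_meta())) should produce something like "12 entries, average 27.9C".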

Well there we have it. A simple way of using your PI to collect weather data. I hope this has been useful and inspired you to create your own projects using the PI!!

Easy as PI Weather Station – collecting the data

I inherited a Raspberry PI from a work colleague. It was complete with the Astro Pi HAT (now known as the Sense HAT). This marvellous add-on provides a variety of environmental sensors measuring temperature, humidity and air pressure.

For a while it sat there on the shelf, dusty and forlorn. While doing some work in my shed I wondered whether I could use my PI to gather information and display it on the matrix display. I found a marvellous article on how to create a weather station by John M. Wargo. It is well worth a read. In the article, data is collected at regular intervals from the PI sensors and then uploaded to an external site hosted by Weather Underground. In no time I had a working weather station. I tweaked the script to show a line graph, but it was a little janky because of the low resolution of the display.

Here is the PI in all its glory showing realtime temperature readings in graph form

In this post we are going to do something similar. We are going to collect data but upload it to our own server running a REST API. Then we are going to display this information on a lovely D3 chart. Wait! What? Yes, that is a lot to take in but fear not, this is going to be a 3-part post. The first part? Getting the data from the PI.

Let’s assume we have a fresh PI complete with an Astro Pi HAT. Log in to your PI using PuTTY or another application. I connect my PI directly to my laptop using an Ethernet cable, but a wireless connection will work as well.

Now in your home directory (in my case /home/pi) create a new directory called collector

mkdir collector

Next use your editor of choice to create a file in the collector directory called collector.py. I use nano so in this case type nano collector.py.

Below is the code for collector.py. I’ll skip over the first few functions: get_cpu_temp(), get_smooth() and get_temp(). These are used to try and get an accurate temperature reading, because the Astro Pi HAT is affected by the heat given off by the PI CPU, and these functions make allowances for that. Details here. If you can physically separate your PI from the HAT using a ribbon cable then you can simply take the standard reading from the humidity sensor as detailed in the Sense HAT API.
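To get a feel for the compensation formula get_temp() applies, t_corr = t - ((t_cpu - t) / 1.5), here is a tiny worked example. It is illustrative only, not part of collector.py:

```python
def correct_temp(t, t_cpu, factor=1.5):
    # compensate the sensor reading for heat bleeding over from the CPU
    return t - ((t_cpu - t) / factor)

# a CPU running at 50C drags a 30C sensor average down to roughly 16.7C
print(round(correct_temp(30.0, 50.0), 1))
```

The further the CPU temperature sits above the sensor average, the larger the correction subtracted from the reading.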

#!/usr/bin/python
'''*******************************************************************************************************************************************
* This program collects environmental information from the sensors in the Astro PI hat and uploads the data to a server at regular intervals *
*******************************************************************************************************************************************'''
from __future__ import print_function
import datetime
import os
import sys
import time
import urllib
import urllib2
from sense_hat import SenseHat
# ============================================================================
# Constants
# ============================================================================
# specifies how often to measure values from the Sense HAT (in minutes)
MEASUREMENT_INTERVAL = 5 # minutes
def get_cpu_temp():
    # 'borrowed' from https://www.raspberrypi.org/forums/viewtopic.php?f=104&t=111457
    # executes a command at the OS to pull in the CPU temperature
    res = os.popen('vcgencmd measure_temp').readline()
    return float(res.replace("temp=", "").replace("'C\n", ""))
# use moving average to smooth readings
def get_smooth(x):
    # do we have the t object?
    if not hasattr(get_smooth, "t"):
        # then create it
        get_smooth.t = [x, x, x]
    # manage the rolling previous values
    get_smooth.t[2] = get_smooth.t[1]
    get_smooth.t[1] = get_smooth.t[0]
    get_smooth.t[0] = x
    # average the three last temperatures
    xs = (get_smooth.t[0] + get_smooth.t[1] + get_smooth.t[2]) / 3
    return xs
def get_temp():
    # ====================================================================
    # Unfortunately, getting an accurate temperature reading from the
    # Sense HAT is improbable, see here:
    # https://www.raspberrypi.org/forums/viewtopic.php?f=104&t=111457
    # so we'll have to do some approximation of the actual temp
    # taking CPU temp into account. The Pi foundation recommended
    # using the following:
    # http://yaab-arduino.blogspot.co.uk/2016/08/accurate-temperature-reading-sensehat.html
    # ====================================================================
    # First, get temp readings from both sensors
    t1 = sense.get_temperature_from_humidity()
    t2 = sense.get_temperature_from_pressure()
    # t becomes the average of the temperatures from both sensors
    t = (t1 + t2) / 2
    # Now, grab the CPU temperature
    t_cpu = get_cpu_temp()
    # Calculate the 'real' temperature compensating for CPU heating
    t_corr = t - ((t_cpu - t) / 1.5)
    # Finally, average out that value across the last three readings
    t_corr = get_smooth(t_corr)
    # convoluted, right?
    # Return the calculated temperature
    return t_corr

The main meat is in the, erm, main() function. Here we set up a loop that polls the sensors every couple of seconds so that the smoothing algorithm has frequent readings to work with, updating the matrix display every 5 seconds. A payload is then assembled every 5 minutes, ready to be sent to the web service; this interval is specified in the constant MEASUREMENT_INTERVAL.

def main():
    global last_temp
    sense.clear()
    last_minute = datetime.datetime.now().minute
    minute_count = 0
    # infinite loop to continuously check weather values
    while True:
        dt = datetime.datetime.now()
        current_minute = dt.minute
        temp_c = get_temp()
        # The temp measurement smoothing algorithm's accuracy is based
        # on frequent measurements, so we'll take measurements every few seconds
        # but only upload on MEASUREMENT_INTERVAL
        current_second = dt.second
        # are we at the top of the minute or at a 5 second interval?
        if (current_second == 0) or ((current_second % 5) == 0):
            message = "{}C".format(int(temp_c))
            sense.show_message(message, text_colour=[255, 0, 0])
        if current_minute != last_minute:
            minute_count += 1
        if minute_count == MEASUREMENT_INTERVAL:
            print('Logging data from the PI')
            payload = {
                'date': dt.strftime("%Y/%m/%d %H:%M:%S"),
                'temperature': round(temp_c, 2),
                'pressure': round(sense.get_pressure(), 2),
                'humidity': round(sense.get_humidity(), 2),
            }
            print(payload)
            # TODO post the results to our server
            minute_count = 0
        last_minute = current_minute
        # wait a couple of seconds then check again
        time.sleep(2)
    # this should never be reached since the loop above is infinite
    print("Leaving main()")
# ============================================================================
# initialize the Sense HAT object
# ============================================================================
try:
    print("Initializing the Sense HAT client")
    sense = SenseHat()
    sense.set_rotation(90)
except:
    print("Unable to initialize the Sense HAT library:", sys.exc_info()[0])
    sys.exit(1)
print("Initialization complete!")
# Now see what we're supposed to do next
if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print("\nExiting application\n")
        sense.clear()
        sys.exit(0)

A Python dictionary is used to hold this data which, rendered as JSON, will look something like this

{
  "date": "2020/05/26 14:01:00",
  "pressure": 1035.2,
  "temperature": 28.36,
  "humidity": 39.82
}

This will be used as the payload to our web service. More to follow in part 2…