SOLID gold

If there is one thing I have learnt during my career in the IT industry, it’s that the industry is a fickle beast. Trends and fashions come and go. Languages fall by the wayside (hey, COBOL74!). How often have you read an article declaring a new framework a “game changer”, only to realise after using it in anger that it does a fraction of what a venerable equivalent does in its sleep?

In this article I’m going to cover something that has not changed and has not gone out of fashion. It crops up again and again.

If there’s one thing you need to learn and, more importantly, USE as a software engineer, it is encapsulated (see what I did there?) in these five principles. But hey, enough of my yakkin’, whaddaya say? Let’s boogie!

The SOLID principles are a set of five design guidelines in object-oriented software development that help engineers create systems that are easy to maintain, scale, and understand. Introduced by Robert C. Martin, these principles aim to reduce “code rot” and make software more robust.


1. Single Responsibility Principle (SRP)

“A class should have one, and only one, reason to change.”

This principle states that a component should perform a single function. When a class handles multiple unrelated tasks, it becomes fragile: a change in one task might accidentally break another. You might be tempted to add a small related function, but don’t do it. Do what is right and create a new class, even if it has only one function. Smaller classes are great: fewer dependencies and easier testing. What’s not to like?

  • Example: Imagine a User class that handles both user data and saving that data to a database. If you change your database schema, you have to modify the User class.
  • Better Approach: Create a User class for data and a UserRepository class for database operations.
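Sketched in JavaScript, the split might look like this (the class and method names are illustrative, and the database is stubbed as any object with an insert() method):

```javascript
// One class holds user data and nothing else...
class User {
  constructor(name, email) {
    this.name = name;
    this.email = email;
  }
}

// ...and a separate class owns persistence. A schema change now only
// touches UserRepository; User never needs to know.
class UserRepository {
  constructor(db) {
    this.db = db; // assumed to be any object exposing insert(table, row)
  }

  save(user) {
    return this.db.insert('users', { name: user.name, email: user.email });
  }
}
```

Each class now has exactly one reason to change: User when the data shape changes, UserRepository when the storage details do.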

2. Open/Closed Principle (OCP)

“Software entities should be open for extension, but closed for modification.”

This somewhat opaquely named principle states that you should be able to add new functionality to a system without changing existing code. This prevents bugs from being introduced into parts of the application that are already working. It comes down to my tenet of minimal code change. Remember, every code change has the potential to introduce bugs!

  • Example: A Discount class that uses a series of if/else statements to check for “VIP” or “Seasonal” discounts. Adding a new discount type requires changing the existing logic.
  • Better Approach: Use an interface or abstract class DiscountStrategy. Each new discount type becomes a new class that implements this interface.
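A minimal JavaScript sketch of the strategy approach (the class names are illustrative):

```javascript
// Each discount is its own strategy object with a common apply() method.
class VipDiscount {
  apply(price) { return price * 0.8; } // 20% off
}

class SeasonalDiscount {
  apply(price) { return price * 0.9; } // 10% off
}

class NoDiscount {
  apply(price) { return price; }
}

// The checkout code is closed for modification: adding a new discount
// type means writing a new class, never editing this function.
function checkout(price, discountStrategy) {
  return discountStrategy.apply(price);
}
```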

3. Liskov Substitution Principle (LSP)

“Subtypes must be substitutable for their base types.”

Barbara Liskov is a pioneer who fundamentally changed how we write and organize code. Before her work in the 1970s, code was often a messy “spaghetti” of instructions. Liskov pioneered the concept of Data Abstraction. She led the team that created CLU, a programming language that introduced the idea of “abstract data types”—the direct ancestor of the “Classes” and “Objects” we use in almost every modern language like Java, Python, and C++. I hope you enjoyed that little history lesson. Let’s proceed.

This principle states that if a program is using a base class, it should be able to use any of its subclasses without knowing it or causing errors. The subclass must honor the “contract” of the parent class.

  • Example: A classic violation is the “Square-Rectangle” problem. If a Square inherits from Rectangle but throws an error when the height and width are set to different values, it breaks the program’s expectations.
  • Better Approach: If a subclass cannot perform the actions of the parent in the same way, they likely shouldn’t share that specific inheritance hierarchy.
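Here is the Square-Rectangle violation sketched in JavaScript (illustrative names again):

```javascript
class Rectangle {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }
  setWidth(w) { this.width = w; }
  setHeight(h) { this.height = h; }
  area() { return this.width * this.height; }
}

// Square "is-a" Rectangle, but keeping the sides equal quietly breaks
// the parent's contract.
class Square extends Rectangle {
  setWidth(w) { this.width = w; this.height = w; }
  setHeight(h) { this.width = h; this.height = h; }
}

// Code written against Rectangle now gets a surprising answer.
function stretch(rect) {
  rect.setWidth(5);
  rect.setHeight(4);
  return rect.area();
}

stretch(new Rectangle(2, 2)); // 20, as expected
stretch(new Square(2, 2));    // 16, not the 20 callers expect
```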

4. Interface Segregation Principle (ISP)

“Clients should not be forced to depend on methods they do not use.”

I’ve seen this many times! You have to implement an interface in order to use a specific API call, but then you realise you must implement functions you are not interested in, leading to the dreaded “not implemented” comment. Incidentally, this can be partly remedied with the Adapter Pattern if you come across it.

It is better to have many small, specific interfaces than one large, “fat” interface. This prevents implementing classes from being burdened with “dummy” methods that do nothing.

  • Example: An IMachine interface with Print(), Scan(), and Fax(). A basic Printer class would be forced to implement Scan() and Fax() even if it can’t perform those actions.
  • Better Approach: Break the interface into separate IPrinter, IScanner, and IFax interfaces.

5. Dependency Inversion Principle (DIP)

“Depend on abstractions, not concretions.”

High-level modules (the logic) should not depend on low-level modules (the tools). Both should depend on abstractions (interfaces). This “decouples” the code, making it easy to swap out components.

This is great for writing tests, and you should be writing tests, many, many tests!! It allows you to easily mock the dependencies.

  • Example: A NotificationService that directly creates an instance of EmailSender. If you want to switch to SMSSender, you have to rewrite the NotificationService.
  • Better Approach: The NotificationService should depend on an IMessageSender interface. You can then “inject” whichever sender you need at runtime.
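A quick JavaScript sketch of constructor injection (all names are illustrative):

```javascript
class EmailSender {
  send(to, message) { return `email to ${to}: ${message}`; }
}

class SmsSender {
  send(to, message) { return `sms to ${to}: ${message}`; }
}

// NotificationService depends only on the send(to, message) contract.
// Any sender, including a hand-rolled test mock, can be injected.
class NotificationService {
  constructor(sender) {
    this.sender = sender;
  }

  notify(user, message) {
    return this.sender.send(user, message);
  }
}

// Swapping implementations is now a one-line change at construction time.
const service = new NotificationService(new SmsSender());
```

In a unit test you would inject a fake sender that records its calls, which is exactly why DIP makes testing so much easier.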

Conclusion

At the end of the day, SOLID is about managing change. Requirements shift, APIs evolve, and businesses pivot.

By following these five principles, you aren’t just writing code for today; you’re leaving a map for the developer who has to touch this file six months from now. It turns software from a fragile house of cards into a robust, modular system.

Before I go, here is a test. Write some code. Store it away for a year. Look at your code. Is it still readable and understandable? Is it SOLID?

GraphQL Part 3 – Persistence with MongoDB

In our last post, we mastered Mutations. We can now query, add, update, and delete films from our Hammer collection. However, every time we restart our Apollo server, our changes vanish into the ether. Our “Watched” list resets, and that film we deleted? It’s back from the dead—and not in a cool, technicolor, cinematic way.

To fix this, we need Data Persistence. In this post, we’ll swap our humble, local JavaScript array for a MongoDB database.

If you haven’t done so already, clone the lab GitHub repository using

git clone https://github.com/jmwollny/lab.git

Install MongoDB

I’m installing on a Mac; if you want to install MongoDB on other systems, go to the MongoDB download page here.

brew tap mongodb/brew
brew install mongodb-community

Now start the MongoDB server. This command ensures that the MongoDB server will restart at logon.

brew services start mongodb/brew/mongodb-community

Now check we have a running instance. Type mongosh. If the shell appears you are golden and are ready to proceed to the next section. Type exit to leave the shell.

Setting Up the MongoDB connection

First, we need to install the MongoDB driver. In your terminal, run:

cd lab/graphql-tutorial-3
npm install mongoose

Mongoose is an Object Data Modeling (ODM) library that makes talking to MongoDB from Node.js much easier. If you open index.js you will see that the films array has been replaced with a MongoDB connection to a database called hammer_films.

const mongoose = require('mongoose');

// Connect to your local or Atlas MongoDB instance
mongoose.connect('mongodb://localhost:27017/hammer_films', {
  useNewUrlParser: true,
  useUnifiedTopology: true
});

const db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', () => console.log('Connected to MongoDB!'));

Defining the Data Model

In GraphQL, we have a Schema. In MongoDB (via Mongoose), we have a Model. These two need to mirror each other so our data flows correctly. A new file called Film.js contains the MongoDB model, which has been exported so it can be shared by seed.js (more about this later!).

const mongoose = require('mongoose');

const filmSchema = new mongoose.Schema({
  title: { type: String, required: true },
  year: { type: Number, required: true },
  watched: { type: Boolean, default: false }
});

// Export the model so both index.js and seed.js can use it
module.exports = mongoose.model('Film', filmSchema);

Updating the Resolvers

This is where the magic happens. Instead of using .find() or .splice() on a local array, we will use Mongoose methods which return Promises. GraphQL handles these asynchronous calls automatically.

The New Queries and Mutations

const resolvers = {
  Query: {
    films: async (parent, args) => {
      // 1. Build a dynamic query object
      let query = {};

      // Watch filter
      if (args.watched !== undefined) {
        query.watched = args.watched;
      }

      // Year filter (Exact match)
      if (args.year) {
        query.year = args.year;
      }

      // Date range filter (using MongoDB operators $gte and $lte)
      if (args.where) {
        // Start a fresh operator object: an exact-year match (a plain number)
        // cannot be combined with $gte/$lte, so the range takes precedence
        query.year = {};
        if (args.where.year_gte) {
          query.year.$gte = args.where.year_gte;
        }
        if (args.where.year_lte) {
          query.year.$lte = args.where.year_lte;
        }
      }

      // Search filter (using Regex for case-insensitive partial match)
      if (args.searchTerm) {
        query.title = { $regex: args.searchTerm, $options: 'i' };
      }

      // Execute the query against the database
      return await FilmModel.find(query);
    },

    // Find by ID - Mongoose maps GraphQL 'id' to MongoDB '_id' automatically
    film: async (parent, args) => await FilmModel.findById(args.id),
  },

  Mutation: {
    addFilm: async (parent, { input }) => {
      // Create a new instance and save it
      const newFilm = new FilmModel(input);
      return await newFilm.save();
    },

    updateWatched: async (parent, { id, watched }) => {
      const updatedFilm = await FilmModel.findByIdAndUpdate(
        id,
        { watched },
        { new: true }, // This flag returns the record *after* it was updated
      );

      if (!updatedFilm) {
        throw new Error('Film not found');
      }

      return updatedFilm;
    },

    deleteFilm: async (parent, { id }) => {
      const deleted = await FilmModel.findByIdAndDelete(id);
      if (!deleted) {
        throw new Error('Film not found');
      }

      return await FilmModel.find();
    },
  },
};

Testing Persistence

Restart your server with node index.js. Now, head back to your GraphQL sandbox at http://localhost:4000/. We can test that after adding a film and restarting our Apollo server, the film still exists!

In the sandbox run a mutation to add a new film.

mutation CreateFilm($input: CreateFilmInput!) {
  addFilm(input: $input) {
    id
    title
    year
    watched
  }
}

Remember to add the variables JSON.

{
  "input": {
    "title": "The Brides of Dracula",
    "year": 1960,
    "watched": false
  }
}

Run the query, then shut down your server (Ctrl + C in the terminal). Start the server again using node index.js.

Run a query to retrieve all films.

query GetAllFilms {
  films {
    id
    title
    watched
    year
  }
}

Query result

{
  "data": {
    "films": [
      {
        "id": "69dfb8e8067cfb4bcaadeb6d",
        "title": "The Brides of Dracula",
        "watched": false,
        "year": 1960
      }
    ]
  }
}

If all has gone well, your data is still there! Unlike our local array, MongoDB has written this data to the disk.

Why use Mongoose with GraphQL?

You might notice that our FilmModel and our GraphQL type Film look very similar. This redundancy is actually a strength. The GraphQL Schema acts as a contract for your frontend (telling it what data it can ask for), while the Mongoose Model acts as a gatekeeper for your database (telling it how the data must be stored).

The “ID” Gotcha

MongoDB uses a field called _id by default. GraphQL usually expects id. Mongoose is smart enough to provide a virtual id field that maps to _id, so the existing queries like film(id: "...") continue to work without a hitch.

Importing the full list of films

Let’s finish by importing our film list into MongoDB, then we can get down to the fun job of watching every one and marking them as watched as we go.

To do this I have provided a handy script. Running the script will clear the database and import all films. All you need to do is open a terminal and run node seed.js.

node seed.js
Connected to MongoDB for seeding...
Old records removed.
157 Hammer films successfully added to the database!

Let’s have a look at the script.

const mongoose = require('mongoose');
const fs = require('fs');
// Import your Mongoose model
const Film = require('./models/Film'); 

const seedDatabase = async () => {
  try {
    // Connect to MongoDB
    await mongoose.connect('mongodb://127.0.0.1:27017/hammer_films');
    console.log("Connected to MongoDB for seeding...");

    // Read the JSON file
    const data = JSON.parse(fs.readFileSync('./films.json', 'utf-8'));

    // Clear existing films
    await Film.deleteMany({});
    console.log("Old records removed.");

    // Bulk insert the data
    await Film.insertMany(data);
    console.log(`${data.length} Hammer films successfully added to the database!`);

    // Close the connection
    process.exit();
  } catch (error) {
    console.error("Error seeding database:", error);
    process.exit(1);
  }
};

seedDatabase();

This script does the following:

  • Connects to MongoDB
  • Parses the list of films (note: we no longer need the id in the JSON)
  • Deletes all records in the database
  • Inserts all records defined in the JSON file

Conclusion

We’ve successfully moved our Hammer database from a “temporary” state to a “permanent” one. By integrating MongoDB, we’ve laid the groundwork for a real-world application. We are no longer just playing with variables in memory; we are managing a persistent data store. As an exercise, try creating different queries or, if you are feeling brave, add more fields to the schema. Have fun coding and, if you feel so inclined, watch one of the suggested films 🙂

GraphQL Part 2 – Mastering Mutations

In our last post, we built a robust way to search through 157 Hammer classics. But what happens when you finally sit down to watch The Brides of Dracula? You need a way to update that record.

In GraphQL, any operation that changes data is called a Mutation.

If you have not followed part 1 of this tutorial, go there now to pull the code from my GitHub repository.

1. Updating the Schema

First, we need to tell our server what these changes look like. We’ll add a Mutation type to our typeDefs. We have one to update a film entry (updateWatched) and one to delete a film (deleteFilm).

type Mutation {
  # Toggle the watched status of a film
  updateWatched(id: ID!, watched: Boolean!): Film
  
  # Delete a film from our collection
  deleteFilm(id: ID!): [Film]
}

2. Writing the Resolvers

Now, we implement the logic to update and delete a film record. Since we’re working with a local array of film data, we’ll use standard JavaScript array methods to find and modify it. Here we are using find and splice.

const resolvers = {
  // ... query resolvers
  
  Mutation: {
    updateWatched: (parent, { id, watched }) => {
      const film = films.find(f => f.id == id);
      if (!film) {
        throw new Error("Film not found");
      }
      
      film.watched = watched;
      return film;
    },
    
    deleteFilm: (parent, { id }) => {
      const index = films.findIndex(f => f.id == id);
      if (index == -1) {
        throw new Error("Film not found");
      }
      
      // Remove the film and return the updated list
      films.splice(index, 1);
      return films;
    }
  }
};

3. Testing in the Playground

Once you restart your server, you can test these live.

node index.js
🚀 Server ready at http://localhost:4000/

Navigating to http://localhost:4000/ will redirect you to the GraphQL sandbox. Click the ‘Query your server’ button.

First of all we need to find all unwatched Dracula films returning the id. Quick quiz! Do you remember how to craft the query? Here it is:

query GetNotWatched {
  films(watched: false, searchTerm: "dracula") {
    id
    title
  }
}

This will return the following.

{
  "data": {
    "films": [
      {
        "id": "70",
        "title": "The Brides of Dracula"
      },
      {
        "id": "102",
        "title": "Dracula: Prince of Darkness"
      },
      {
        "id": "115",
        "title": "Dracula Has Risen from the Grave"
      },
      {
        "id": "122",
        "title": "Scars of Dracula"
      },
      {
        "id": "125",
        "title": "Countess Dracula"
      }
    ]
  }
}

Pick a film and remember the ID. This will be used in the next step. In my case I will pick the first film which has an id of 70.

Mark “Brides of Dracula” (ID: 70) as watched:

Paste this query into the sandbox (remember to substitute your own id if it is different).

mutation {
  updateWatched(id: "70", watched: true) {
    title
    watched
  }
}

After running the mutation, run the GetNotWatched query again to check that the film is no longer in the list.

Removing a film

Let’s remove “Brides of Dracula”.

mutation {
  deleteFilm(id: "70") {
    title
  }
}

If we run a query to return the film with id 70, GraphQL will now return null.

query GetFilm {
  film(id: 70) {
    id
    watched
    year
  }
}

Results from the query

{
  "data": {
    "film": null
  }
}

Adding a film

Let’s add the film back!

When adding a record, passing four separate arguments (ID, Title, Year, Watched) can get messy. Instead, we define an input type in our schema to group them together.

Update the Schema

We create a new input type which specifies which fields are mandatory when creating a new Film. In our case all fields must be specified (indicated by the “!”). This input type is then used in our addFilm mutation.

input CreateFilmInput {
  id: ID!
  title: String!
  year: Int!
  watched: Boolean!
}

type Mutation {
  # ... previous mutations
  addFilm(input: CreateFilmInput!): Film
}

Update the Resolver

const resolvers = {
  Mutation: {
    // ... updateWatched and deleteFilm
    
    addFilm: (parent, { input }) => {
      // Check if ID already exists to prevent duplicates
      const exists = films.find(f => f.id === input.id);
      if (exists) {
        throw new Error("A film with this ID already exists.");
      }

      const newFilm = { ...input };
      films.push(newFilm);
      return newFilm;
    }
  }
};

Testing the “Add” Mutation

Enter the query into the sandbox.

mutation CreateNewHammerFilm($input: CreateFilmInput!) {
  addFilm(input: $input) {
    id
    title
    year
  }
}

Under the query box, enter the variables JSON.

{
  "input": {
    "id": "70",
    "title": "The Brides of Dracula",
    "year": 1960,
    "watched": false
  }
}

After running the mutation you can run the GetFilm query, which will show the resurrected film in all its glory!

query GetFilm {
  film(id: 70) {
    id
    watched
    year
  }
}

Why use input types?

Using an input object instead of flat arguments makes your API much more maintainable. If you decide to add a director or studio field later, you only have to update the input type, rather than changing the signature of the mutation across your entire codebase.

Why “Mutation” instead of “Query”?

While you could technically change data inside a Query resolver, it’s a major “no-go” in the GraphQL world. Using the Mutation keyword tells the server (and other developers) that this operation has side effects. It also ensures that if you send multiple mutations in one request, they run serially (one after another) to prevent data race conditions.

Conclusion

We’ve come a long way from a simple JavaScript array of my favourite films. By implementing Mutations, we’ve transformed our Hammer dataset into a functional API. We can now:

  • Create new entries to keep our database growing.
  • Update existing records to track our viewing progress.
  • Delete entries to keep our data clean and accurate.

This “CRUD” (Create, Read, Update, Delete) cycle is the backbone of almost every application you use daily. While we are currently managing this data in local memory via a simple array, the patterns we’ve used here—Input Types, Non-Nullable arguments, and Serial Mutation execution—are the exact same patterns you would use when connecting to a production-grade database like MongoDB or PostgreSQL.

What’s Next?

Now that the backend logic is solid, the next logical step is to explore Data Persistence. In the next post, we’ll look at how to hook this GraphQL server up to a database so that our “Watched” status doesn’t disappear every time we restart the server!

Until then, happy coding!

GraphQL Part 1 – A Modern Approach to APIs

For years, REST (Representational State Transfer) has been the standard for web services. However, as applications grow in complexity, developers often find themselves juggling dozens of endpoints and dealing with over-fetching data.

GraphQL is a query language for your API and a server-side runtime for executing those queries using a type system you define for your data. Instead of multiple “dumb” endpoints, GraphQL provides a single “smart” endpoint that can return exactly what the client asks for.


Why GraphQL?

  • No More Over-fetching: You get exactly the data you request—nothing more, nothing less.
  • Single Request, Multiple Resources: You can fetch data from different sources in one trip to the server.
  • Strongly Typed: GraphQL uses a schema to define what is possible, which acts as a contract between the frontend and backend.
  • Self-Documenting: Because of the schema, tools like GraphiQL allow you to browse the API structure effortlessly.

For this tutorial we will be working with a list of classic Hammer Studios films. Each film will have id, title, year and watched fields.

The code below shows an example Schema and the Query definitions. The schema defines a Film as having four fields. Where the field type is suffixed with “!”, it indicates that the field must not be null and will always return a value.

const typeDefs = gql`
  type Film {
    id: ID
    title: String!
    year: Int
    watched: Boolean
  }
  input FilmFilter {
    year_gte: Int
    year_lte: Int
  }
  type Query {
    # Return a list of films, optionally filtered by watched status, year, or search term in the title
    films(watched: Boolean, year: Int, searchTerm: String, where: FilmFilter): [Film]
    film(id: ID!): Film
  }
`;

After the Schema we have the queries defined. If you define a query without any parameters, e.g. films: [Film], and then try to use a parameter in your query, GraphQL will complain…loudly, with a GRAPHQL_VALIDATION_FAILED error.

Here we have defined two queries.

  1. films(watched: Boolean, year: Int, searchTerm: String, where: FilmFilter): [Film] – return a list of Film objects. We can optionally supply any of the following query parameters – watched, year, searchTerm, where (the last is used to support range queries on the year field)
  2. film(id: ID!) – return a single Film. The id parameter MUST be specified

Getting Started: A Simple Implementation

I have created a Github repo for this tutorial. It is straightforward to follow. Once you have cloned the repository, open readme.md for instructions. Alternatively read on!

git clone https://github.com/jmwollny/lab.git
cd lab/graphql-tutorial
npm install

Once the dependencies have been installed you can run the Apollo server.

node index.js

You may be thinking, okay I’ve defined the Schema and the Queries, where do I get the data from and how do I map the queries to the underlying datasource?

The list of films is a hard-coded array defined in index.js. In practice we would be calling out to one or more data sources to get this information.

To map and filter the queries, this is where resolvers come in.

Open a terminal

cd lab/graphql-tutorial

Open index.js. This file contains the Schema, Queries and Resolvers and starts the Apollo server. In a production environment these would be split out into different files; we are using a single file to keep things simple.

At the bottom of this file you will see the resolvers definition. Inside the films arrow function we can create filters for each of our defined query parameters.

To filter the dataset we check for the presence of the query parameter and perform the filter using the built-in JavaScript filter function. We make sure to use the filtered list in any filters that follow.

When we are done we just return the list to the server.

const resolvers = {
  Query: {
    films: (parent, args) => {
      let filteredFilms = films;
      // Watch filter
      if (args.watched !== undefined) {
        filteredFilms = filteredFilms.filter(f => f.watched === args.watched);
      }   
      // Year filter
      if (args.year) {
        filteredFilms = filteredFilms.filter(f => f.year === args.year);
      }
      // Date range filter
      if (args.where) {
        if (args.where.year_gte) {
          filteredFilms = filteredFilms.filter(f => f.year >= args.where.year_gte);
        }
        if (args.where.year_lte) {
          filteredFilms = filteredFilms.filter(f => f.year <= args.where.year_lte);
        }
      }

      // Search filter
      if (args.searchTerm) {
        filteredFilms = filteredFilms.filter(f => 
          f.title.toLowerCase().includes(args.searchTerm.toLowerCase())
        );
      }
      
      return filteredFilms;
    },
    film: (parent, args) => films.find(f => f.id === args.id),
  },
};

GraphQL queries

For those used to SQL these may look a little odd at first, but they are quite straightforward once you get the hang of the syntax.

A simple query

Let’s retrieve the full list of films. Open your browser at http://localhost:4000/ then click the Query your Server button. If all is well the sandbox will open. Paste the following query.

query GetAllFilms {
  films {
    title
    watched
    year
   }
}

GetAllFilms is the name we give to our query. It can be anything that succinctly describes our query! Next we indicate that we want to execute the films query and return a list with title, watched and year fields. Note: you need to supply at least one field to be returned in the output.

Well done, you have successfully run your first GraphQL query 🙂

Using a filter in our query

Say we wanted all films containing the word “dracula” that were made in the 1970s and that we have already watched.

In order to specify a range we need to define some extra variables to support the query. In our case we need year_gte and year_lte to define our bounds.

input FilmFilter {
  year_gte: Int
  year_lte: Int
}

We then define a where query parameter that uses the FilmFilter

films(watched: Boolean, year: Int, searchTerm: String, where: FilmFilter): [Film]

The last piece of the puzzle is to update our resolver to use our new variables.

// Date range filter
if (args.where) {
  if (args.where.year_gte) {
    filteredFilms = filteredFilms.filter(f => f.year>=args.where.year_gte);
  }
  if (args.where.year_lte) {
    filteredFilms = filteredFilms.filter(f => f.year <= args.where.year_lte);
  }
}

Finally we craft our query using the new where parameter.

query GetSeventiesDracula {
  films(
    where: { year_gte: 1970, year_lte: 1979 }, 
    searchTerm: "dracula", 
    watched: true) {
      id
      title
      year
      watched
  }
}

A best practice when it comes to GraphQL is to separate the query data from the query itself. This is accomplished using variables. In the sandbox the variables JSON can be entered in the area underneath the query text box. Our new query will look like this.

query GetSeventiesDraculaWithVars($where: FilmFilter, $searchTerm: String, $watched: Boolean) {
  films(where: $where, searchTerm: $searchTerm, watched: $watched) {
    id
    title
    year
    watched
  }
}

After the query name we pass in the list of variables that we will provide values for, along with their types ($where: FilmFilter, $searchTerm: String, $watched: Boolean). In the films query, instead of declaring the values, we provide placeholders prefixed with ‘$’.

All that remains is to provide the query with concrete values. The JSON will look like this.

{
  "where": {
    "year_gte": 1970,
    "year_lte": 1979
  },
  "searchTerm": "dracula",
  "watched": true
}

Alternatively you can open a terminal session and use the curl command as shown below.

curl -X POST http://localhost:4000/ \
-H "Content-Type: application/json" \
-d '{
  "query": "query GetFilteredFilms($where: FilmFilter, $searchTerm: String, $watched: Boolean) { films(where: $where, searchTerm: $searchTerm, watched: $watched) { id title year watched } }",
  "variables": {
    "where": {
      "year_gte": 1970,
      "year_lte": 1979
    },
    "searchTerm": "dracula",
    "watched": true
  }
}'

Query results

{"data":{"films":[{"id":"133","title":"Dracula A.D. 1972","year":1972,"watched":true},{"id":"142","title":"The Satanic Rites of Dracula","year":1973,"watched":true}]}}

When to Use GraphQL

While GraphQL is powerful, it isn’t always the “REST-killer.”

Use GraphQL when…

  • You have complex, nested data requirements.
  • You support multiple clients (Web, iOS, Android) with different data needs.
  • You want to aggregate data from multiple microservices.

Use REST when…

  • Your app is simple with few resources.
  • You need standard HTTP caching mechanisms.
  • You are building a very small, lightweight microservice.

Final Thoughts

GraphQL shifts the power from the server to the client. By allowing the frontend to dictate the data structure, it speeds up development cycles and reduces the payload sent over the wire. Once you get to grips with the extra boilerplate and query syntax, it is surprisingly easy to use.

You may now be asking, “well, this is all well and good, but how do I update or delete records from the database?” Well, dear reader that is the topic for my next article.

An AI Agentic Systems quick start

In this post I will look at the different types of agentic systems. AI is moving fast and it is easy to become confused with the constantly evolving technologies. Let’s start!

There are two main types:
1. Workflows
2. Agents

Workflows

A workflow is a series of steps that follow a predefined, rigid path. Predictability is high because you know exactly what the system will do, but this comes at the expense of flexibility. If one step breaks, the whole process is likely to fail. There are five main types of workflow.

Prompt Chaining

This is probably one of the most common types of workflow out there. Given an input, the LLM (Large Language Model) carries out a task and optionally hands the results to some code which transforms or cleans them before they are passed to the next LLM in the chain, and so on…

The key is that you are breaking down a complicated single task into smaller, manageable steps.
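As a rough JavaScript sketch (callLLM is a hypothetical stand-in for whatever model client you use, stubbed here so the shape of the chain is the point, not the model):

```javascript
// Hypothetical model call, stubbed for illustration.
async function callLLM(prompt) {
  return `result for: ${prompt}`;
}

// Plain code between the LLM calls transforms/cleans the intermediate result.
function cleanResult(text) {
  return text.trim().toLowerCase();
}

// A two-step chain: summarize, clean, then translate.
async function summarizeAndTranslate(article) {
  const summary = await callLLM(`Summarize: ${article}`);
  const cleaned = cleanResult(summary);
  return callLLM(`Translate to French: ${cleaned}`);
}
```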

Routing

The routing workflow is where an initial router LLM analyzes and categorizes an incoming query and directs it to the most appropriate specialized sub-task.

Parallelization

The input is passed to a coordinator (code) that breaks the task into independent pieces. These run simultaneously across multiple LLMs. The last sub-task is not an LLM but some code that will take the results and aggregate them. This is best for speed and processing large volumes of data.

Orchestrator/worker

Here, an LLM acts as the manager. It dynamically decides which sub-tasks are needed and assigns them to “workers”. The Orchestrator then synthesizes the various results into a final response. This is more flexible than standard parallelization because the “manager” adapts to the complexity of the query.

Evaluator

In this workflow you have two LLMs in a feedback loop. One is the generator and one is the evaluator. The first LLM takes the initial user prompt and creates a draft. The second reviews the draft against a set of given criteria and provides detailed feedback. The generator receives the feedback and produces a second version. This continues until the evaluator produces a pass or has hit its “max loops” limit. Without a “max loops” limit, an Evaluator and Generator can sometimes get stuck in an infinite loop (and burn your API budget!!).
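The loop can be sketched like this. `generate` and `evaluate` are toy stand-ins for the two LLMs, but the control flow, including the max-loops guard, is the real pattern.

```python
def generate(prompt, feedback=None):
    # toy generator LLM: revise the draft whenever feedback arrives
    return prompt if feedback is None else feedback + "!"

def evaluate(draft):
    # toy evaluator LLM: "pass" once the draft ends with three exclamation marks
    if draft.endswith("!!!"):
        return True, None
    return False, draft

def refine(prompt, max_loops=5):
    draft = generate(prompt)
    for _ in range(max_loops):  # the "max loops" limit stops infinite ping-pong
        passed, feedback = evaluate(draft)
        if passed:
            return draft
        draft = generate(prompt, feedback)
    return draft  # out of budget: return the best effort so far

print(refine("hello"))  # hello!!!
```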

Agents

Unlike workflows, Agents use a reasoning loop to determine their own path. They are characterized by their ability to use Tools, such as searching the web or executing code, to solve open-ended problems.

The LLM gets to choose its own design and plot its own path to solve the problem. This autonomy makes agents very powerful but less predictable.
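Here is a toy Python sketch of the reasoning loop. The `llm_decide` function stands in for the model choosing its next action, and the tools are trivial local functions; a real agent makes an LLM call at each step and bounds the loop just like this.

```python
def tool_search(query):
    # toy "web search" tool with a canned answer
    return {"capital of France": "Paris"}.get(query, "unknown")

def tool_count(text):
    # a second tool the agent could choose: count words
    return str(len(text.split()))

TOOLS = {"search": tool_search, "count": tool_count}

def llm_decide(goal, observations):
    # stand-in for the reasoning LLM: pick the next action, or finish
    if not observations:
        return ("search", goal)          # nothing known yet, go look it up
    return ("finish", observations[-1])  # good enough, report the answer

def agent(goal, max_steps=5):
    observations = []
    for _ in range(max_steps):           # always bound the loop
        action, arg = llm_decide(goal, observations)
        if action == "finish":
            return arg
        observations.append(TOOLS[action](arg))
    return "gave up"

print(agent("capital of France"))  # Paris
```

Notice that the order of tool use is not fixed in advance; it emerges from the decisions made inside the loop, which is exactly where the unpredictability comes from.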

What are the drawbacks?

  1. Unpredictable path – you do not know in which order the sub-tasks will run, although tools can be used to give an agent boundaries
  2. Unpredictable quality – what quality will the output be?
  3. Unpredictable costs – you don’t know how long it will take to run

Mitigations

  1. Monitoring – it is essential to have the visibility to understand what interactions are going on
  2. Guardrails – protection to ensure models are doing what they should be doing, safely, consistently and within the given boundaries

Conclusion

In the current landscape of AI engineering, Workflows remain the most popular choice for production-grade applications. This is because businesses value reliability and cost-control. Patterns like Prompt Chaining and Routing allow developers to build systems that are fast, explainable, and easy to debug. If you are building a customer support bot or an automated report generator, a structured workflow is usually your best bet.

However, the industry is rapidly shifting toward Agents and Evaluator-Optimizer loops for high-stakes or creative tasks. While more “expensive” in terms of compute and time, these systems provide a level of quality and autonomy that simple chains cannot match. They are becoming the standard for coding assistants, research tools, and complex problem-solving.

The Rule of Thumb

  • Use Workflows when the process is well-defined and you need 100% consistency.
  • Use Agents when the task is open-ended and the path to the solution is too complex to map out by hand.

Easy as PI Weather Station – putting it all together

Sorry it has taken me so long to continue with this series. There were little things that got in the way such as C*****19, and going through redundancy, but let’s put those little things aside and recap. Last time we created a web service using Node.js and Express which will be used to capture environmental data from our Raspberry PI Sense Hat.

In this article we are going to hook things up by sending the data collected from the Raspberry PI, to our web service. We will also be updating our endpoints to handle the data correctly. Let’s get started!

First of all open the collector.py file.

We are going to POST the data to our web service endpoint. Find the line where we are checking if we have reached the interval and replace it with the code shown here.

if minute_count == MEASUREMENT_INTERVAL:
    # Create the payload object
    payload = {
        'date': dt.strftime("%Y/%m/%d %H:%M:%S"),
        'temperature': round(temp_c, 2),
        'pressure': round(sense.get_pressure(), 2),
        'humidity': round(sense.get_humidity(), 2),
    }
    # Encode the payload and POST it to the web service
    data = urllib.urlencode(payload)
    request = urllib2.Request(END_POINT, data)
    response = urllib2.urlopen(request).read()
    print(payload)
    minute_count = 0

We are using a couple of Python libraries called urllib and urllib2 to do the heavy lifting of encoding our payload and sending it across to our Node.js server.

All that is left is to add the new endpoint to our Node.js server to process the request and update the list to return an actual list of weather data. Exciting eh! Open up another terminal session and navigate to the server directory. Using your editor of choice open up the index.js file.

Update the endpoints as shown below.

// Provide service meta data
app.get('/api/environment/meta', (req,res) => {
    res.header("Access-Control-Allow-Origin", "*");
    res.send({
        averageTemp:averageTemp,
        count:data.length,
        lastEntry:lastEntry
    } );
} );
// List all entries
app.get('/api/environment/entries', (req,res) => {
    res.header("Access-Control-Allow-Origin", "*");
    res.send(data);
} );
app.post('/api/environment', (req,res) => {
    if (!isValid(req.body)) {
        res.status(400).send('Invalid request, required fields missing.');
        return;
    }
    const count = data.length + 1;
    const entry = {
        id:count,
        date:req.body.date,
        temperature:req.body.temperature,
        pressure:req.body.pressure,
        humidity:req.body.humidity
    }
    lastEntry = entry;
    total += parseFloat(req.body.temperature);
    averageTemp = total / count;
    data.push(entry);
    res.json(entry);
} );

You may recall last time we added a dummy /api/environment/entries endpoint which simply returned an empty array.

Let’s flesh this out. The new /api/environment endpoint is defined as a POST method, which means the data is sent as part of the body of the request. We first validate that the body contains the required fields (via the isValid helper), then update the count metric. We then build a JSON object by pulling out the parts of the request we are interested in. Finally we update the lastEntry variable and work out the average temperature to date before adding the new entry to our list.

With these changes in place we can run our collector and Node.js server to see the end to end implementation working in all its glory. I would recommend opening two separate terminals and laying them out side-by-side.

In the terminal for the Python collector start the data harvest using the command python collector.py. On your PI you should see regular temperature updates on the matrix display.

Raspberry PI Weather Station
Weather station running in the Raspberry PI

In the second terminal ensure you are in the collector/server directory and start the Node.js server using the command node index.js. If all is well you will see the message Listening on port 3000.

Terminal sessions running the collector and Node.js server
Terminal sessions showing the collector and Node.js server running on the PI

After a while you will see entries printed in the server console, indicating that the weather data has been collected from the PI and sent to our server.

Now comes the exciting bit. We can try out our new endpoints. Open a new browser tab and check the new endpoints are functioning correctly.

http://raspberrypi:3000/api/environment/entries

http://raspberrypi:3000/api/environment/meta

The new endpoints shown using the RESTED Chrome extension

Well there we have it. A simple way of using your PI to collect weather data. I hope this has been useful and inspired you to create your own projects using the PI!!

Easy as PI Weather Station – create a Node.js web service in 5 minutes

Introduction

In the last article we created a Python script to collect environmental data from a Sense Hat equipped Raspberry PI.

This article will add to that by creating a web service that will display all logged entries. In the next blog post we will add the ability to upload data from PI to the web service.

This web service will be running on the Raspberry PI but of course it could run anywhere as long as it supplies an endpoint to enable consumers to use it.

Building a RESTful API – do’s and don’ts

The web service will use RESTful principles. REST is a set of best practices to use when designing an API. In a nutshell:

  • DO return JSON
  • DO set the Content-Type header correctly i.e. application/json. Note, when using the PATCH method the content type must be application/merge-patch+json
  • DON’T use verbs e.g. use /songs instead of listSongs/
  • DO use plurals e.g. /api/songs/2019
  • DO return error details in the response body
  • DO make use of status codes when returning errors
    • 400-bad request, 403-forbidden, 404-not found, 401-unauthorised, 500 server error
  • For CRUD operation return the following codes
Method  | Description         | URL (with example body)                                                  | Response code
GET     | retrieve data       | api/customers                                                            | 200
POST    | create data         | api/customers  {"name":"jon", "email":"a@a.com"}                         | 201
PUT     | update data         | api/customers/1  {"name":"dave", "email":"b@a.com"}                      | 200
DELETE  | delete data         | api/customers/1                                                          | 204
PATCH   | update partial data | api/customers/1  [{"op":"replace", "path":"/email", "value":"a@a.com"}]  | 204

REST methods, actions and expected response codes.

Defining the endpoints

Our API will have three endpoints. This article is focussed on the first one, to list entries. The other two will be addressed in a later post.

/api/environment/entries – to list all entries

The resulting JSON will be something like this:

[
    {
        "id":1,
        "date":"2020/06/06 15:34:01",
        "temperature":"24.48",
        "pressure":"998.32",
        "humidity":"44.9"
    }
]

/api/environment/ – to create a new entry

/api/environment/meta – to retrieve metadata such as number of entries, average temperature and last entry that was uploaded

Creating the web service using Express

Let’s get started! Connect your PI to the network either wirelessly or using a cable. I use an Ethernet cable plugged directly into my laptop.

  1. Power up your PI!
  2. SSH into your PI. I used PuTTY
  3. Navigate to the collector directory we created in the last blog post.
  4. mkdir server
  5. cd server

We are going to use Node.js to create our server. Node.js is based on the Chrome V8 JavaScript engine but adds modules to deal with IO, HTTP and much more. It is basically a JavaScript engine wrapped in a C++ executable. It uses a single-threaded event loop that hands off requests asynchronously, which makes it very well suited to handling large numbers of quick, high-throughput requests.

Out of the box it is very easy to create a simple REST API. We will be using another node module called express which makes managing routing much easier.

So first things first, if you haven’t already done so, install node and npm on your PI. Here is a nice instructable showing how to do it.
https://www.instructables.com/id/Install-Nodejs-and-Npm-on-Raspberry-Pi/

When you have successfully installed node and npm return to the server directory we created earlier. Now we can install express which is a lightweight framework for creating REST APIs.
npm install express --save

Create a file called index.js using your editor of choice. I used nano.
nano index.js

Paste the following:

// import the express module and create the express app
const express = require('express');
const app = express();
// install middleware that can encode the payload
app.use(express.urlencoded({extended:false})); 
// create an array to hold the environmental data
const data = []; 
// End points for the web service
//list entries
app.get('/api/environment/entries', (req,res) => {
    res.send(data); //Just send an empty array for now
} );
// create a web server, running on your port of choice or 3000
const port = process.env.PORT || 3000;
app.listen(port,() => {
    console.log(`Listening on port ${port}`);
} );

This server will respond to HTTP GET requests at the /api/environment/entries endpoint listening on port 3000.

Start the node server
node index.js

Open your browser and go to
http://raspberrypi:3000/api/environment/entries

The result will not be very exciting as you will just see an empty array returned in the browser. However, give yourself a pat on the back. You have created your first fledgling web service!

The four levels of a logon dialog

Today’s article is a bit of fun. We are looking at the four levels of styling a simple logon dialog. These UI components are pretty ubiquitous. Here are a few examples of HTML logon dialogs

Level 1 – basic styling

They all share the same elements, at the minimum two input boxes, an OK button and usually a cancel button. Sometimes there are labels next to each input box. Other times placeholder text is shown in the input controls which gets replaced with whatever you type into the input control. In addition there is often a link to reset the password should it be forgotten.

With this in mind our logon dialog will have two input controls with placeholder text, a single button and a link to reset a forgotten password. Let’s get to work. Our level 1 logon dialog will be vanilla. Some HTML and very little CSS. The CSS is there just to layout the control on the page. Here is our basic HTML.

<html lang="en">
	<head>
		<meta charset="utf-8">
		<title>Login</title>
		<meta name="description" content="Login to your account">
		<meta name="author" content="Jonathan">
		<link rel="stylesheet" href="css/styles.css?v=1.0">
	</head>
	<body>
		<div class="container">
			<div class="log-form">
				<h2>Login to your account</h2>
				<form>
					<input type="text" title="username" placeholder="username" />
					<input type="password" title="password" placeholder="password" />
					<button type="submit" class="btn">Login</button>
					<a class="forgot" href="#">Forgot Username?</a>
				</form>
			</div>
		</div>
	</body>
</html>

Nothing out of the ordinary here. We have defined a container div to allow it to be centred on the page. Inside this div we have another div which holds a standard form element. The form element has two input tags, a button and an anchor. All nice and simple. Let’s now take a look at the CSS

.container {
  display: flex;
  align-items: center;
  justify-content: center;
  height: 100%;
  width: 100%;
}
form {
  width: 100%;
}
input {
  display: block;
  width: 100%;
  margin-bottom: 2em;
  padding: .5em 0;
}
.btn {
  padding: .5em 2em;
}

The container div is styled to fit the whole browser and uses the flex layout to easily centre the logon dialog vertically and horizontally. Note the height and width need to be 100% for this to work. The rest of the CSS is concerned with adding padding and margins to space out the elements in the dialog. This is the result.

Level 1 logon dialog with minimal styling

Yes it is functional but does it look good? Not really. I’d score it a C and that’s pushing it. The next step is to ‘style it up’: add colours and make it pop. First we need to choose a colour scheme. It’s summer here in the UK at the moment and the sun is out so I’m thinking orange…let’s move on to level 2.

Level 2 – style it up

* {
  box-sizing: border-box;
}
body {
  background-color: #ff9800ad;
}
.container {
  align-items: center;
  display: flex;
  justify-content: center;
  height: 100%;
  width: 100%;
}
.log-form {
  position: relative;
  width: 40%;
  background: #fff;
  box-shadow: 0px 2px 5px rgba(0, 0, 0, .25);
}
form {
  width: 100%;
  padding: 2em;
}
h2 {
  color: rgb(255, 87, 34);
  font-size: 1.35em;
  display: block;
  width: 100%;
  text-transform: uppercase;
  padding: 1em;
  margin: 0;
  font-weight: 200;
}
input {
  display: block;
  width: 100%;
  margin-bottom: 2em;
  padding: .5em 0;
  border: none;
  border-bottom: 1px solid #eaeaea;
  padding-bottom: 1.25em;
  color: #757575;
}
.btn {
  display: inline-block;
  background: rgb(255, 87, 34);
  border: none;
  padding: .5em 2em;
  color: white;
  margin-right: .5em;
  box-shadow: inset 0px 1px 0px whitesmoke;
}
.forgot {
  display: flex;
  justify-content: flex-end;
  color: rgb(255, 87, 34);
  font-size: .75em;
  width: 100%;
  transition: color 0.2s ease-in 0s;
}
Level 2 logon dialog, hello sunshine

This is much better. We have a bright background with the dialog centred as before. The dialog has been lifted by introducing a shadow effect around it. The clunky inputs have been styled with a single clean line and the text has been given a deep orange colour to complement the background. The vanilla old skool button has been replaced with a solid orange rectangle with white text. This is all good but look what happens when we navigate around the dialog.

The default blue focus outlines do not look right with the new dialog theme. Luckily we can do something about that.

Level 3 – add some finesse

The level 2 version of our dialog looked good, but in use the default browser behaviour let it down. So for the next level we are going to provide custom styling to handle the form interactivity. At the same time we are going to add some pleasing animation to really finesse our dialog.

The CSS focus, active and hover pseudo selectors have been added along with animated transitions when moving between state. To highlight the button when it has focus I have added a box-shadow to act as a ‘focus ring’ around the button. The link now adds an underline style when it has focus. Here is the complete CSS.

* {
  box-sizing: border-box;
}
body {
  background-color: #ff9800ad;
}
.container {
  align-items: center;
  display: flex;
  justify-content: center;
  height: 100%;
  width: 100%;
}
.log-form {
  position: relative;
  width: 40%;
  background: #fff;
  box-shadow: 0px 2px 5px rgba(0, 0, 0, .25);
}
form {
  width: 100%;
  padding: 2em;
}
h2 {
  color: rgb(255, 87, 34);
  font-size: 1.35em;
  display: block;
  width: 100%;
  text-transform: uppercase;
  padding: 1em;
  margin: 0;
  font-weight: 200;
}
input {
  display: block;
  width: 100%;
  margin-bottom: 2em;
  padding: .5em 0;
  border: none;
  border-bottom: 1px solid #eaeaea;
  padding-bottom: 1.25em;
  color: #757575;
  transition: border-bottom 0.2s ease-in 0s;
}
input:focus,
input:hover {
  outline: none;
  border-bottom: 1px solid darkorange;
}
.btn {
  display: inline-block;
  background: rgb(255, 87, 34);
  border: none;
  padding: .5em 2em;
  color: white;
  margin-right: .5em;
  box-shadow: inset 0px 1px 0px whitesmoke;
  transition-property: background,box-shadow;
  transition-duration: 0.1s;
  transition-timing-function: ease-in;
  box-shadow: none;
}
.btn:hover {
  background: rgba(255, 87, 34, 0.8);
}
.btn:focus {
  outline:none;
  box-shadow: 0px 0px 2px 2px rgba(191,54,12,0.6);
}
.btn:active {
  background: #bf360c;
  box-shadow: inset 0px 1px 1px #bf360c;
}
.forgot {
  display: flex;
  justify-content: flex-end;
  color: rgb(255, 87, 34);
  font-size: .75em;
  text-decoration-line: none;
  width: 100%;
  transition: color 0.2s ease-in 0s;
}
.forgot:focus {
  text-decoration-line: underline;
  outline: none;
}
.forgot:hover {
  color: rgba(255, 87, 34, 0.8);
}
.forgot:active {
  color: #bf360c;
}

Level 4 – animate it

I thought it would be cool to have a bouncing effect when the page is loaded. So the dialog jumps up and bounces ‘down’ onto the screen. This is fairly straightforward to do using animation keyframes.

The animation is referenced in the log-form class by supplying its name and duration. This tells the browser which animation to apply to the element and how long it should run.

.log-form {
  animation-duration:1s;
  animation-name: bounce;
  width: 40%;
  background: #fff;
  box-shadow: 0px 2px 5px rgba(0, 0, 0, .25);
}

Here the animation is called bounce; the timing function lets you define how the animation accelerates and decelerates. Details here. Next you define the animation itself using the @keyframes keyword in your CSS file.

@keyframes bounce {
  0%   { transform: scale(1,1)    translateY(0); }
  10%  { transform: scale(1.1,.9) translateY(0); }
  30%  { transform: scale(.9,1.1) translateY(-100px); }
  50%  { transform: scale(1,1)    translateY(0); }
  57%  { transform: scale(1,1)    translateY(-7px); }
  64%  { transform: scale(1,1)    translateY(0); }
  100% { transform: scale(1,1)    translateY(0); }
}

The @keyframes keyword defines what to do and when to do it. So in the example above, at 10% we are squatting down to jump by making the dialog shorter and fatter. At 30% the dialog springs up, becoming thinner and taller in the process. At the halfway point we land on the ground, before doing a little bounce after landing. This makes more sense seen in slow motion followed by the faster version.

The transform property allows you to rotate, scale, skew or translate(move) an element. In this case we are using a combination of scale and translate.

Here is the full CSS

* {
  box-sizing: border-box;
}
body {
  background-color: #ff9800ad;
}
.container {
  align-items: center;
  display: flex;
  justify-content: center;
  height: 100%;
  width: 100%;
}
.log-form {
  animation-duration:1s;
  animation-name: bounce;
  animation-timing-function: ease;
  width: 40%;
  background: #fff;
  box-shadow: 0px 2px 5px rgba(0, 0, 0, .25);
}
@keyframes bounce {
  0%   { transform: scale(1,1)    translateY(0); }
  10%  { transform: scale(1.1,.9) translateY(0); }
  30%  { transform: scale(.9,1.1) translateY(-100px); }
  50%  { transform: scale(1,1)    translateY(0); }
  57%  { transform: scale(1,1)    translateY(-10px); }
  64%  { transform: scale(1,1)    translateY(0); }
  100%  { transform: scale(1,1)    translateY(0); }
}
form {
  width: 100%;
  padding: 2em;
}
h2 {
  color: rgb(255, 87, 34);
  font-size: 1.35em;
  display: block;
  width: 100%;
  text-transform: uppercase;
  padding: 1em;
  margin: 0;
  font-weight: 200;
}
input {
  display: block;
  width: 100%;
  margin-bottom: 2em;
  padding: .5em 0;
  border: none;
  border-bottom: 1px solid #eaeaea;
  padding-bottom: 1.25em;
  color: #757575;
  color: rgb(255, 87, 34);
  transition: border-bottom 0.2s ease-in 0s;
}
input:focus,
input:hover {
  outline: none;
  border-bottom: 1px solid darkorange;
}
.btn {
  display: inline-block;
  background: rgb(255, 87, 34);
  border: none;
  padding: .5em 2em;
  color: white;
  margin-right: .5em;
  box-shadow: inset 0px 1px 0px whitesmoke;
  transition-property: background,box-shadow;
  transition-duration: 0.1s;
  transition-timing-function: ease-in;
  box-shadow: none;
}
.btn:hover {
  background: rgba(255, 87, 34, 0.8);
}
.btn:focus {
  outline:none;
  -webkit-box-shadow: 0px 0px 2px 2px rgba(191,54,12,0.6);
  -moz-box-shadow: 0px 0px 2px 2px rgba(191,54,12,0.6);
  box-shadow: 0px 0px 2px 2px rgba(191,54,12,0.6);
}
.btn:active {
  background: #bf360c;
  box-shadow: inset 0px 1px 1px #bf360c;
}
.forgot {
  display: flex;
  justify-content: flex-end;
  color: rgb(255, 87, 34);
  font-size: .75em;
  text-decoration-line: none;
  width: 100%;
  transition: color 0.2s ease-in 0s;
}
.forgot:focus {
  text-decoration-line: underline;
  outline: none;
}
.forgot:hover {
  color: rgba(255, 87, 34, 0.8);
}
.forgot:active {
  color: #bf360c;
}

So there we have it. Zero to hero in four steps. Enjoy!

Binary Search Tree – Java implementation

This data structure is an important one to know. A Binary Search Tree allows you to maintain a data set in sorted order. This in turn allows you to efficiently locate data items. If you were to use a standard array you would need to sort it every time you add a new data item. Not so with the BST. So with this in mind it must be pretty tricky to implement one, yes? It’s actually quite straightforward. Let us dive in.

Each node in the BST holds some data which can be compared. This is important because it is this that enables data to be chunked or grouped based on the notion that one node is greater than the other.

A simple Binary Search Tree

Our data set looks like this 5,2,6,9,1,3,4.

In the example above 5 is the first item to be added. There is nothing in the tree to start with so the root contains 5. Next up in our list is 2. We would examine the first(root) node and ask ourselves this question “Is our new value greater or less than the current node?”. Here 2 is less than 5 so it is placed in the left-hand side of the root node. Next up is 6. Again we would look at the first node(the root) and determine that 6 is greater than 5 and therefore place it in the right-hand side of the root node. Now our root node has two children 2 and 6. Now if we want to add the value 9 we look at the root, 9 is greater than 5 so we will place it on the right-hand side. However we can’t do this because it already has the value 6. Now we look at node containing 6. 9 is greater than 6 so 9 is placed in the right-hand side of the node that contains 6, and so on…

Using this information we can define a Java class to represent a node in a BST.

public class BSTNode {
    public int data;
    public BSTNode left;
    public BSTNode right;
    public BSTNode(int value) {
        data = value;
        left = null;
        right = null;
    }
}

The BSTNode class is very simple. It has three public members. One to hold the data, in this case to keep things easy it is just an integer number for easy comparison. The other two hold the left and right child nodes. When the class is constructed the data value is passed in and set. Now to create the tree class itself and a method to add a value to the tree.

public class BinarySearchTree {
	public BSTNode root;
	public BinarySearchTree() {
		root = null;
	}
	/**
	 * Add a new node
	 * @param value The node value
	 * @return The new node
	 */
	public BSTNode add(int value) {
		// Create the new node
		BSTNode newNode = new BSTNode(value);
		// If there is no root then create and return
		if (this.root == null) {
			this.root = newNode;
			return this.root;
		}
		//Recurse through the tree and find the one where the data should be set
		BSTNode node = nextNode(this.root, value);
		if (value < node.data) {
			node.left = newNode;
		}
		else {
			node.right = newNode;
		}
		return newNode;
	}
	
	/**
	 * Get the next node that does not have the relevant child
	 * @param node The current node
	 * @param value The value to be compared against the node's data value
	 * @return The node
	 */
	private BSTNode nextNode(BSTNode node, int value) {
		boolean leftNode = value < node.data;
		if (leftNode && node.left != null) {
			return nextNode(node.left, value);
		} else if (!leftNode && node.right != null) {
			return nextNode(node.right, value);
		}
		return node;
	}
}

In order to traverse the tree effectively I have implemented a recursive function which drills down into each child, comparing the value as it goes; when it finds the last node that does not have the relevant left/right child, it returns it. The calling function then sets the left or right member of that node depending on the value.

Let’s test our BST. We are going to add a unit test to ensure that the tree has been built correctly.

import static org.junit.Assert.*;  
import org.junit.Test;
public class TestBinarySearchTree {
	@Test
	public void testAdd() {
		// 5,2,6,9,1,3,4
		BinarySearchTree tree = new BinarySearchTree();
		// Add the root 5
		BSTNode root = tree.add(5);
		assertEquals(root, tree.root);
		// Add 2
		BSTNode node2 = tree.add(2);
		assertEquals(root.left, node2);
		// Add 6
		BSTNode node6 = tree.add(6);
		assertEquals(root.right, node6);
		// Add 9
		BSTNode node9 = tree.add(9);
		assertEquals(node6.right, node9);
		// Add 1
		BSTNode node1 = tree.add(1);
		assertEquals(node2.left, node1);
		// Add 3
		BSTNode node3 = tree.add(3);
		assertEquals(node2.right, node3);
		// Add 4
		BSTNode node4 = tree.add(4);
		assertEquals(node3.right, node4);
	}
}

Here we are checking that each node is placed correctly in the tree.

Easy as PI Weather Station – collecting the data

I inherited a Raspberry PI from a work colleague. It was complete with the Astro PI Hat (now known as the Sense Hat). This marvellous add-on provides a variety of environmental sensors such as temperature, humidity and air pressure.

For a while it sat there on the shelf, dusty and forlorn. While doing some work in my shed I wondered whether I could use my PI to gather information and display it on the matrix display. I found a marvellous article on how to create a weather station by John M. Wargo. It is well worth a read. In the article, data is collected at regular intervals from the PI sensors and then uploaded to an external site hosted by Weather Underground. In no time I had a working weather station. I tweaked the script to show a line graph but it was a little janky because of the low resolution of the display.

Here is the PI in all its glory showing realtime temperature readings in graph form

In this post we are going to do something similar. We are going to collect data but upload it on our own server running a REST API. Then we are going to display this information on a lovely D3 chart. Wait! What? Yes, that is a lot to take in but fear not, this is going to be a 3 part post. The first part? Getting the data from the PI.

Let’s assume we have a fresh PI complete with an Astro Hat. Log in to your PI using puTTY or another application. I connect my PI direct to my laptop using an Ethernet cable but a wireless connection will work as well.

Now in your home directory (in my case /home/pi) create a new directory called collector

mkdir collector

Next use your editor of choice to create a file in the collector directory called collector.py. I use nano so in this case type nano collector.py.

Below is the code for collector.py. I’ll skip the first few functions: get_cpu_temp(), get_smooth() and get_temp(). These are used to try to get an accurate temperature reading, because the Astro PI hat is affected by the heat given off by the PI CPU. These functions try to make allowances for that. Details here. If you can physically separate your PI using a ribbon cable then you can simply take the standard reading from the humidity sensor as detailed in the Sense Hat API.

#!/usr/bin/python
'''*******************************************************************************************************************************************
* This program collects environmental information from the sensors in the Astro PI hat and uploads the data to a server at regular intervals *
*******************************************************************************************************************************************'''
from __future__ import print_function
import datetime
import os
import sys
import time
import urllib
import urllib2
from sense_hat import SenseHat
# ============================================================================
# Constants
# ============================================================================
# specifies how often to measure values from the Sense HAT (in minutes)
MEASUREMENT_INTERVAL = 5 # minutes
def get_cpu_temp():
    # 'borrowed' from https://www.raspberrypi.org/forums/viewtopic.php?f=104&t=111457
    # executes a command at the OS to pull in the CPU temperature
    res = os.popen('vcgencmd measure_temp').readline()
    return float(res.replace("temp=", "").replace("'C\n", ""))
# use moving average to smooth readings
def get_smooth(x):
    # do we have the t object?
    if not hasattr(get_smooth, "t"):
        # then create it
        get_smooth.t = [x, x, x]
    # manage the rolling previous values
    get_smooth.t[2] = get_smooth.t[1]
    get_smooth.t[1] = get_smooth.t[0]
    get_smooth.t[0] = x
    # average the three last temperatures
    xs = (get_smooth.t[0] + get_smooth.t[1] + get_smooth.t[2]) / 3
    return xs
def get_temp():
    # ====================================================================
    # Unfortunately, getting an accurate temperature reading from the
    # Sense HAT is improbable, see here:
    # https://www.raspberrypi.org/forums/viewtopic.php?f=104&t=111457
    # so we'll have to do some approximation of the actual temp
    # taking CPU temp into account. The Pi foundation recommended
    # using the following:
    # http://yaab-arduino.blogspot.co.uk/2016/08/accurate-temperature-reading-sensehat.html
    # ====================================================================
    # First, get temp readings from both sensors
    t1 = sense.get_temperature_from_humidity()
    t2 = sense.get_temperature_from_pressure()
    # t becomes the average of the temperatures from both sensors
    t = (t1 + t2) / 2
    # Now, grab the CPU temperature
    t_cpu = get_cpu_temp()
    # Calculate the 'real' temperature compensating for CPU heating
    t_corr = t - ((t_cpu - t) / 1.5)
    # Finally, average out that value across the last three readings
    t_corr = get_smooth(t_corr)
    # convoluted, right?
    # Return the calculated temperature
    return t_corr

The main meat is in the, erm, main() function. Here we set up a loop that polls the Pi every 5 seconds, so that the smoothing algorithm works effectively. The data is sent to the web service every 5 minutes, as specified in the global constant MEASUREMENT_INTERVAL.

def main():
    sense.clear()
    last_minute = datetime.datetime.now().minute
    minute_count = 0
    # infinite loop to continuously check weather values
    while True:
        dt = datetime.datetime.now()
        current_minute = dt.minute
        temp_c = get_temp()
        # The temp measurement smoothing algorithm's accuracy is based
        # on frequent measurements, so we'll take measurements every 5 seconds
        # but only upload on MEASUREMENT_INTERVAL
        current_second = dt.second
        # are we at a 5 second interval? (0 covers the top of the minute)
        if (current_second % 5) == 0:
            message = "{}C".format(int(temp_c))
            sense.show_message(message, text_colour=[255, 0, 0])
        if current_minute != last_minute:
            minute_count += 1
        if minute_count == MEASUREMENT_INTERVAL:
            print('Logging data from the PI')
            payload = {
                'date': dt.strftime("%Y/%m/%d %H:%M:%S"),
                'temperature': round(temp_c, 2),
                'pressure': round(sense.get_pressure(), 2),
                'humidity': round(sense.get_humidity(), 2),
            }
            print(payload)
            # TODO post the results to our server
            minute_count = 0
        last_minute = current_minute
        # wait a couple of seconds, then check again
        time.sleep(2)
    # this should never be reached, since the above is an infinite loop
    print("Leaving main()")
# ============================================================================
# initialize the Sense HAT object
# ============================================================================
try:
    print("Initializing the Sense HAT client")
    sense = SenseHat()
    sense.set_rotation(90)
except Exception:
    print("Unable to initialize the Sense HAT library:", sys.exc_info()[0])
    sys.exit(1)
print("Initialization complete!")
# Now see what we're supposed to do next
if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print("\nExiting application\n")
        sense.clear()
        sys.exit(0)

A JSON object is used to hold this data, which will look something like this:

{
  "date": "2020/05/26 14:01:00",
  "pressure": 1035.2,
  "temperature": 28.36,
  "humidity": 39.82
}

This will be used as the payload to our web service. More to follow in part 2…
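As a taster for part 2, here is one way the TODO in main() could eventually be filled in. This is only a sketch: it uses Python 3's urllib.request (the urllib2 import in the listing above is its Python 2 equivalent), and the endpoint URL is a made-up placeholder, not our actual server address:

```python
import json
import urllib.request

def build_post_request(payload, url="http://example.com/collector"):
    # NOTE: the URL above is a placeholder; substitute your own server's address.
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
    )

payload = {
    "date": "2020/05/26 14:01:00",
    "temperature": 28.36,
    "pressure": 1035.2,
    "humidity": 39.82,
}
req = build_post_request(payload)
# a Request constructed with a data argument defaults to the POST method
print(req.get_method())  # POST
# the caller would then send it with urllib.request.urlopen(req)
```

Building the Request separately from sending it keeps the serialisation logic testable without a network connection, which is handy when developing away from the Pi.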