High-Speed Web: Start with the End in Mind


The speed of your website is important to users. But in this age of pay-per-compute, reducing processing all along the chain is important to keep owner costs down too.

Fortunately, on the modern web, these two seemingly competing requirements actually complement each other: reducing processing costs can be achieved by improving site speed.

To explore how to improve site speed and reduce overall processing, let’s start with the end in mind and work backwards.

Starting with the customer, what constitutes a ‘good’ user experience, in terms of speed?

Ultimate User Experience?

Imagine you are a first-time visitor to your site. The ideal situation would be for an immediate view of your site to be instantly available on the device you’re using: no JavaScript processing, no querying for data, no image optimizations, just the browser showing HTML and [inline] CSS (or even one big snapshot image of your content).

Bear with me, I know some sites need to show dynamic data and some do not, but remember, at this point you’re just an ordinary user. You don’t care about what goes on in the browser or on the server, what static vs dynamic means, or what pain it takes to achieve results. You just want a great experience.

As a user, I want to see the content immediately. By the time I’ve made sense of the initial page (0-3.8 secs), I want to interact with it.

If the data I am viewing is updated server-side while the page is open, those updates should be pushed to me automagically. Getting new data should not rely on me fetching it, e.g. by hitting some kind of refresh button that calls back to the server.

If I leave the site and come back to it, by the time I have indicated that I wish to view the page (preemptive rendering?), it should already be fully loaded, with no stale data on the screen. If any updates have been made to the page since I last saw it, the changes, and only the changes, should have been pushed to my device, using as little bandwidth as possible.

Sounds great, right? But are these demands even possible using the latest tools and technologies for web and server?

Server Side Rendering

Arguably one of the most important things is to show the content of your site in the quickest way possible.

Generally, the steps that a browser takes to display a web page are as follows:

  1. A request to the server is made for a webpage.
  2. The server decodes the request, and the page and its resources (files) are downloaded.
  3. The web browser uses the page resources to build the page.
  4. The page is then rendered (displayed) to the user.

The bottleneck in this process is steps 2 and 3, i.e. downloading the resources and ‘building’ the page as the resources become available. Rendering a ‘built’ page, step 4, is what a browser is really good at.

Can we improve, or even skip, steps 2 and 3 to give users a better experience?

If the customer’s browser isn’t going to do the hard work of pulling all the resources together to build the page, who is going to do it?

Combining and building the page can be performed on the server instead. The resulting ‘built’ page can then be the resource that is served to the customer.

Server side rendering is an ‘old’ method of serving web pages. Before client-side JavaScript was as powerful as it is today, SSR was a common way to serve dynamic web pages.

What’s changed is that you can now use JavaScript on both the client and the server; you can replace PHP, Java, ASP etc. on the server with Node.js. This certainly helps with code reuse between client and server, but are we actually any further forward?

The principles are still the same: the client browser makes a call to the server, the server dynamically creates a webpage containing an initial view, and the server delivers that page to the client.
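
As a minimal sketch of that flow, using Node’s built-in http module (the headline variable is a stand-in for real data fetching; this is illustrative, not production code):

// toy-ssr.js: a bare-bones server side rendering sketch
const http = require('http');

http.createServer((req, res) => {
  // pretend this value came from a database or API call
  const headline = 'Welcome, first-time visitor';

  // the server does the 'building'; the browser receives finished HTML
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(`<!DOCTYPE html>
<html>
  <head><style>h1 { font-family: sans-serif; }</style></head>
  <body><h1>${headline}</h1></body>
</html>`);
}).listen(3000);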

Server side rendering certainly solves the processing problem for the customer, but we’ve just pushed the problem server-side. Not only have we increased the amount of processing that we, the site owner, have to pay for; we are also not much further on in improving the overall speed of the site. Certainly, the customer may see a more complete view of your site sooner, and the server may have better access to some of the resources, but the overall amount of processing to build the page stays relatively the same.

If we, as the site owner, are now paying for the processing to ‘build’ an initial view of the site, how can we make this process as efficient as possible?

Static First

The vast majority of content on the web changes infrequently. Even on sites with ‘dynamic’ content, usually only a small amount of the total content of a page is truly dynamic.

If the same content is going to be viewed by more than one visitor, why generate it more than once? It might not be a big issue if you only have two visitors, but even with 11 visitors you might be doing 10x more processing than is needed.

If as much of your content as possible is precompiled, i.e. the data has already been fetched, the styles applied and the HTML generated preemptively, it can be delivered to the user quicker. In fact, if the content is already compiled, we can take the server out of the chain completely for this interaction and allow the browser to access the content directly from a location close to the customer.

The ‘static first’ strategy is to compile a static view of the page first, serve it from a CDN, delay enabling JavaScript until the page has loaded, then hydrate any stale dynamic data.
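
As a rough sketch of what such a page might look like (the price element and the /api/price endpoint are invented for illustration):

<!-- index.html: generated at build time and served straight from a CDN -->
<!DOCTYPE html>
<html>
<head>
  <!-- critical CSS inlined at build time: nothing extra to download -->
  <style>.price { color: green; }</style>
</head>
<body>
  <h1>Product name, baked in at build time</h1>
  <!-- placeholder for the only truly dynamic value on the page -->
  <span class="price" id="price">…</span>
  <script>
    // JavaScript waits until the page has loaded, then hydrates
    // only the stale dynamic data (endpoint is hypothetical)
    window.addEventListener('load', function () {
      fetch('/api/price')
        .then(function (res) { return res.json(); })
        .then(function (data) {
          document.getElementById('price').textContent = data.price;
        });
    });
  </script>
</body>
</html>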

By adopting static first you can potentially reduce your hosting costs to cents per month, as opposed to dollars, AND provide a blisteringly fast experience for your customers.

But what about pages that are never viewed? To statically generate an entire site, you need to generate and manage updates for all potential routes in your website. You might be generating hundreds or thousands of web pages that real users may never visit. However, although ‘real’ users may not visit these pages, it is likely, and welcome, that at least one crawler bot will want to access them (unless the content is not for the eyes of search engines).

Caching Vs Static Site

So if having assets ready and available on the network, close to the user, is preferable, is this not just server caching?

Yes and no. There are a number of different caching options available for your site. You can cache the individual items that are referenced by your page, e.g. images, CSS files, database queries etc., you can cache the page itself, or both.

A static first strategy tries to cut the cord with the server entirely. It does not require database query caching; instead it processes as much as possible into one cacheable unit, i.e. page caching.

Caching is generally performed dynamically, i.e. it happens when one or more users access a particular page. Static site generation is performed pre-emptively, i.e. before any users access a particular page. Caching and static site generation both aim to make reused assets as available, and as close to a user’s device, as possible; the difference is whether this is done entirely pre-emptively or dynamically.
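
To make the contrast concrete, dynamic page caching usually boils down to response headers; a sketch with Node’s http module and an illustrative one-hour lifetime:

// cached-page.js: page caching sketch; the origin 'builds' the page once,
// then shared caches / CDNs may re-serve it without asking us again
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/html',
    // s-maxage: shared caches (e.g. a CDN) may keep this page for an hour
    'Cache-Control': 'public, s-maxage=3600',
  });
  res.end('<h1>Rendered at ' + new Date().toISOString() + '</h1>');
}).listen(3000);

A statically generated page skips even that first render: the file is already sitting on the CDN before the first request arrives.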

Static First, Only Update Dynamic Data

However, frequent updates may be unavoidable depending on the type of site. For dynamic sites, it is not feasible to continually pre-compile all views for all users, especially when the data changes frequently.

But remember again, your user does not care. Mostly, they don’t understand the difference between static and dynamic sites; they want to see the important content, fast.

You can aim to statically compile as much of the page as possible beforehand, but the ‘dynamic’ parts will involve some sort of processing. As a first-time user, I may accept the page loading with a placeholder where the dynamic data should be, then ‘hydrating’ the data on page load. On the other hand, the user may, in this instance, prefer a slightly slower page load if the page arrives with the data already fully ‘hydrated’. The choice probably depends on what makes your customer happiest.

Subsequent Visits & Data Caching

Up until now we’ve generally been concentrating on the scenario where customers first visit your site. When a customer visits your site again, the situation is slightly different. A returning customer will already have, on their device, many of the resources for the page. If nothing has changed, they may even already have everything needed to show the page again.

As a returning user, it makes little sense for me to have to contact the server again, have a new view generated and download a new page. If only a small subsection of the page I already have has changed, this is unnecessary processing.

The ideal situation is if the server actually pushes updates to my browser. When this happens, my browser doesn’t have to continually ask if new data is available. An even better scenario is if the server has already pushed me the data before I open the page again.
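
This is achievable today. As one possible approach, Server-Sent Events let the browser subscribe once and then receive pushes (the /updates endpoint and the price element are invented for illustration):

// client side: subscribe once, then let the server push changes to us
const source = new EventSource('/updates');
source.onmessage = function (event) {
  // apply only the changed data to the page
  const update = JSON.parse(event.data);
  document.getElementById('price').textContent = update.price;
};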

Even if you don’t consider websockets and/or service workers, you still have the opportunity to cache API data on the server. If a piece of data has not changed since the last time your browser (or any other browser) asked the server for it, regenerating and resending it introduces unnecessary processing. Not for the faint-hearted, but API caching can be achieved using the ETag header of an HTTP call.
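
A minimal sketch of the idea with Node’s http module; note that this toy version still rebuilds the body to compute the tag, so a real implementation would cache the body and tag and only save the resend:

// etag-api.js: skip resending API data the browser already holds
const http = require('http');
const crypto = require('crypto');

http.createServer((req, res) => {
  const body = JSON.stringify({ price: 42 }); // stand-in for a real query
  const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

  // the browser echoes the ETag it last saw via If-None-Match;
  // if it matches, an empty 304 is all that goes over the wire
  if (req.headers['if-none-match'] === etag) {
    res.writeHead(304);
    return res.end();
  }
  res.writeHead(200, { 'Content-Type': 'application/json', ETag: etag });
  res.end(body);
}).listen(3000);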

Final Note: Software Development Is Hard

There are only two hard things in Computer Science: cache invalidation and naming things.

— Phil Karlton

There are lots of difficult things about software development, but cache invalidation is the devil.

To reduce processing, all the methods above require caching in some shape or form; static website generation is just page caching by another name. Putting things in a cache is easy, knowing when to update them is incredibly difficult.

If a page on your website changes URL, should you rebuild your entire site in case other pages reference the changed page? Does this re-introduce more processing than it’s worth?

If you statically compile your stylesheets inline into the page, what happens when a stylesheet changes? Does every page need to be compiled again, even if it doesn’t make use of the changed style?

If a property in a row of data in your database changes, how do you invalidate your API cache? What if the data is referenced by another item in the cache?

If you are a masochist and like solving these types of problems, have at it. For the rest of us mere mortals, look to use a tool or service that helps you manage these problems.

Here is a non-exhaustive list of some newer tools and services that help in this space:

Hugo – Static website builder

Shifter – Compile your WordPress site to static

Vercel – JAMstack + React server side rendering + hosting

Netlify – JAMstack + hosting

Webiny – Almost-serverless static website builder + API generation

Svelte – JavaScript framework that precompiles to plain JS, so you don’t need to deploy and load a framework with your webapp.

FOSDEM 2020 & Web Fonts Performance


FOSDEM is a free and open source conference held in Brussels every year. There is no admission charge for the two-day event, and the topics under discussion are incredibly diverse. If you are interested in anything and everything open source, FOSDEM has you covered.

This year was my second year at the conference and, yet again, it did not disappoint. Most of my time was spent in the Web Performance and JavaScript developer rooms, which were packed out. I also managed to get to some ethics talks and to the Quantum Computing room (but quickly scarpered out of there when things got complicated).

I learned quite a bit there, but one quick win that I want to share with you was on web font performance. During Sia Karamalegos’s talk on web fonts she shared some quick tips on getting your fonts to load quicker. The full talk is embedded below, but the tips are as follows:

  • Preconnect External Fonts. If you are loading external fonts, e.g. Google Fonts, a quick performance tip is to use the “preconnect” resource hint. Using it establishes a handshake with the external server before the resource is requested; with the connection warmed up, the download takes less time when it is needed. *A small caveat to this: only preconnect/preload things definitely used on the page, otherwise the browser is doing extra work that might not even be needed.*
<!-- the usual way of loading Google Fonts -->
<link href="https://fonts.googleapis.com/css?family=Muli:400"
      rel="stylesheet">
Waterfall: wasted connection time when loading Google Fonts
<!-- a better way of loading -->
<link rel="preconnect" href="https://fonts.gstatic.com/" crossorigin>
<link href="https://fonts.googleapis.com/css?family=Muli:400"
      rel="stylesheet">
Waterfall: the performance improvement with preconnect
  • Preload Self-hosted Fonts. If you are loading self-hosted fonts, the tip is to use the “preload” resource hint to load the fonts quicker. “preload” tells the browser to fetch a resource right away (without actually executing it), so the files are downloaded immediately and do not have to wait on other resources to be loaded.
<!-- loading self-hosted fonts (no preload) -->
<link as="font" type="font/woff2"
  href="./fonts/muli-v12-latin-regular.woff2" crossorigin>

<link as="font" type="font/woff2"
  href="./fonts/muli-v12-latin-700.woff2" crossorigin>
Waterfall: self-hosted fonts waiting to load until after the CSS
<!-- a better way -->
<link rel="preload" as="font" type="font/woff2"
  href="./fonts/muli-v12-latin-regular.woff2" crossorigin>

<link rel="preload" as="font" type="font/woff2"
  href="./fonts/muli-v12-latin-700.woff2" crossorigin>
Waterfall: self-hosted fonts loading earlier with preload

Resource Hints

To explain the difference between the “rel” values, here is a quick cheat sheet from @addyosmani.

Full Video

Please check out the video below for the full talk, which also includes tips on FOIT, variable fonts, and tooling.

Full Talk On Font Performance

Firestore Saving User Data


Cloud Firestore + Authentication FTW

The last post on this site was about how Firebase’s Cloud Firestore was a great option for saving data, especially in new or prototype applications.

In a previous post I also showed how you can use Firebase’s handy authentication mechanism to easily register and authenticate users to your site.

This post will attempt to show how the Firestore and authentication mechanisms can be combined to store user preference information.

Authentication Recap

In the auth post we saw how easy it was to set up and that, when a user authenticates, they get a unique id which you can then use.


firebase.auth().onAuthStateChanged(function(user) {
  if (user) {
    // User is signed in. uid is now available to use
    var uid = user.uid;
  }
});

The code above shows how we can watch for when a user logs in. After they have logged in we can use the uid as a unique identifier for each logged-in user. The next step is to start creating some data in our Firestore that we can link to this authenticated user each time they log in.

Save User Related Data

Saving user-related data, e.g. profile information, with Cloud Firestore is easy. Once you have an authenticated user’s unique identifier you can start to save documents in your Firestore with the UID as an identifier. Each time the user logs in you just look up the document in your Firestore to retrieve the related information about this user.


firebase.auth().onAuthStateChanged(function(user) {
  var dbUser = db.collection('users')
    .doc(user.uid)
    .set({
      email: user.email,
      someotherproperty: 'some user preference'
    });
});

In the code above we wait for the user to log in, retrieve their UID, and create a document in the Firestore using the UID as the index. From this point we can add as many properties to this document as we need, and retrieve the information any time the user logs in.
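
Reading the document back on a later login is just as short; a sketch in the same style as the code above:

firebase.auth().onAuthStateChanged(function(user) {
  if (user) {
    // look up the preferences we saved against this UID
    db.collection('users').doc(user.uid).get().then(function(doc) {
      if (doc.exists) {
        var prefs = doc.data();
        console.log('Welcome back,', prefs.email);
      }
    });
  }
});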

Simple stuff! Give Firebase a go, you won’t regret it.

Firebase Cloud Firestore


NoSQL, huh, what is it good for?

I am not a huge fan of NoSQL. If you can keep your data structured, you should keep your data structured. Also, I have never, to date, worked on a project where a relational database was not scalable enough to cater for the amount of data I needed to throw at it. NoSQL databases, like Firestore, have their uses, but for most production projects I’ve worked on so far, a relational database was either the ‘best’ choice or ‘good enough’.

However, sometimes, the structure of the data is unknown or likely to change. One such occasion is when you are starting out on a project. For prototype web applications I have found that NoSQL (Mongo, Firebase etc) can really speed up the initial stages of the project. Firebase’s new Cloud Firestore offering is a really useful NoSQL database for prototyping projects.

Real-time Database Vs Cloud Firestore

Firebase used to call its database offering the ‘Realtime Database’; it now has a new offering called ‘Cloud Firestore’. Both are currently supported and available through the console, but it does appear that Firebase is trying to coax users towards the newer, but still in beta, Firestore.

The main difference between the two seems to be the data model. One of the frustrating things about the Realtime Database was that you needed to de-normalize your data in order to use the queries effectively. If your data was hierarchical in nature you found yourself having to jump through hoops to use the Realtime Database effectively, e.g. because a query always returns the entire sub-tree, you needed to create separate trees for your data to avoid bloat in your result sets.

Cloud Firestore FTW

Firestore is not just any old NoSQL database, it has the following compelling features:

  • Easy to set up. The Firebase docs do a great job of helping you get started in whatever language you use (web, iOS, Android, Node, Go, Python, Java).
  • Easy to use. I have tried to think of a way that the Google team could have made it even easier to use the SDK… but failed.
  • Realtime. Real-time means real-time: try it out, create a simple project, hook your UI up to some data, and watch the UI change as you edit the data in your console (see the snippet after this list).
  • Scalable. So they say; sadly I’ve not had to use the scalability superpowers yet.
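
Here is what the real-time point looks like in practice; a minimal sketch with the web SDK (the collection and document names are invented):

// the callback runs immediately with the current data, then again on
// every change, including edits you make in the Firebase console
db.collection('scores').doc('game1').onSnapshot(function(doc) {
  console.log('Current data:', doc.data());
});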

Recommendation

Even though it’s still in beta, I would thoroughly recommend choosing Firestore over the Realtime Database for new projects. For existing projects which currently use the Realtime Database, the choice is not as simple. Migrating to Firestore is not a simple task due to the vastly different data models. Take a look at the pros and cons to determine if it is indeed worth the migration effort.

 

In the next blog post I’ll show how you can get started with Firestore and build upon the Authentication post to tie authentication events to saved user data.

 

Firebase Authentication


Tools For Your Tool-Belt

As a freelance developer it’s good to have a number of different tools in your tool-belt for when the situation demands. Firebase, and in particular Firebase Authentication, has proven really useful over the past couple of years.

JWT

Most projects that I work on nowadays require some sort of user authentication. In the past I have used JWT or basic auth as my go-to solutions for authentication. However, even though these methods are fairly straightforward, I still had the nagging feeling that “this is a common problem, there must be a way to make it even easier”.

JWT is great and easy to get started with, but it is not always plain sailing; basic auth is, well… basic. JWT is very similar to Firebase Authentication in that you can use a 3rd party (e.g. the Auth0 service) to authenticate users and not have to set up your own authentication server and process. However, manually persisting tokens on the client side and handling the clean-up on expiry, logout etc. still felt verbose to a lazy dev like myself.

Firebase Auth

Firebase Authentication is easy to set up, has a wide range of easy-to-use SDKs, is free to start using, and “supports authentication using passwords, phone numbers, popular federated identity providers like Google, Facebook and Twitter, and more”.

Most of my experience using Firebase authentication has been in web apps or hybrid mobile app development; below are some really quick steps on how to get started for web:

  1. Sign up for a Firebase account and create a new project.
  2. Choose “Add Firebase to your web app” and follow the instructions, i.e. include the JS library and add the configuration details to your app.
  3. Add a screen which asks the user for an email address and password, then pass these details to the “createUserWithEmailAndPassword” SDK function to create a new user.
  4. Add another screen which asks for an email address and password, then pass these details to the “signInWithEmailAndPassword” function. When the function returns you should have an authenticated user (both calls are sketched after this list)! I could go into more detail on the other methods, e.g. “signOut”, but most of them really are self-explanatory and the docs do a great job of showing you how.
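
As a sketch of steps 3 and 4 (error handling trimmed; the email and password values come from your form fields):

// step 3: register a brand new user
firebase.auth().createUserWithEmailAndPassword(email, password)
  .then(function() { console.log('user created and signed in'); })
  .catch(function(error) { console.error(error.message); });

// step 4: sign an existing user in
firebase.auth().signInWithEmailAndPassword(email, password)
  .then(function() { console.log('signed in'); })
  .catch(function(error) { console.error(error.message); });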

Add a login screen with text boxes and a login/register button, then hook the button click to the Firebase SDK methods

Facebook, Google, Twitter, GitHub

To add Facebook, Google, Twitter or Github authentication just:

  1. Enable the required type of authentication through the “Authentication” tab on your Firebase console (following any specific instructions for each provider).
  2. Add a button to your login page for each provider you wish to use.
  3. Hook up the button click to use the appropriate SDK method on the client e.g. “signInWithPopup(new firebase.auth.GoogleAuthProvider())”
  4. After the user has authenticated themselves through the popup window, your call should return and you should now have a [Google] authenticated user (see the sketch after this list).
  5. *A word of caution for hybrid mobile developers:* popups will not work in your hybrid mobile app; you will need to use native Facebook/Google libraries to achieve the same thing, e.g. using Facebook sign-in for Ionic.
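
For illustration, the popup flow looks like this (Google shown; swap in whichever provider you enabled):

var provider = new firebase.auth.GoogleAuthProvider();
firebase.auth().signInWithPopup(provider)
  .then(function(result) {
    // result.user is your [Google] authenticated user
    console.log('signed in as', result.user.displayName);
  })
  .catch(function(error) { console.error(error.message); });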

Mobile federated logins are waaaaaay more difficult!

Populating Elastic With Your Database


The Problem

Recently I was tasked with providing fast full-text searching, across multiple tables, for a reasonably big database (~2m records).

Immediately I reached for some sort of Solr or Elastic solution. Being the lazy developer that I am, I chose the path of least resistance (the most samples online on how to get started) and went with Elastic [edit: I would seriously consider Algolia for my next project, if I had fewer than 10k records to index].

Getting Elastic set up was relatively simple. Having access to a few AWS credits, I used the AWS Elasticsearch service because I didn’t want the responsibility of maintaining the server.

Now to populate with some data.

Taking data from a database and putting it in an Elastic index must be a common scenario with lots of examples and help, right?

Well, not quite as common as I thought, and not as easy as I was hoping.

I was “hoping” for a nice simple user interface: point at my database, select what data I want, import into Elastic, done. I thought, with the relative maturity of Elastic, that it would be baked into the product, or that someone would have created an add-on already. At the least, I thought, there would be a plethora of examples showing how to get data from a database into Elastic.

What I found was lacking, and that’s the reason I’m making this post: to help others looking to do something similar.

Logstash

The currently recommended solution for indexing database data is Logstash (up until 2015 the recommended way was through Rivers).


Logstash. Looks great from the diagram

With Logstash you can connect to your data, filter it, transform it, and then add it to Elastic. Great! Just what I was looking for.

Logstash takes quite a bit of work to set up and configure though. Installation is fairly straightforward; just download and install on Windows or Linux. After installation there is no UI, just a command line tool to which you supply configurations.

Unfortunately Logstash cannot connect to databases out of the box. To connect Logstash to your database you will first need to jump through a couple of hoops:

  1. Install the JDBC input plugin for Logstash; the instructions on how to do this can be found here.
  2. The JDBC plugin does not include any database drivers, so you will need to find and download the driver you need. I am mostly connecting to Postgres, so I downloaded the Postgres JDBC driver. Make a note of where you download the driver to; you will need to add its path to your configuration later.

Providing you have no issues with your Java installation, you should now be ready to go.

Logstash Configurations

Most of your time will be spent fine-tuning configurations. Simple configurations are straightforward, i.e. connect to a single table with basic field types and import into Elastic. Anything outside these parameters can be challenging.

A basic configuration to connect to a Postgres database and import into a new Elastic index could look something like this:


# file: simple-out.conf
input {
    jdbc {
        # Postgres jdbc connection string to our database, mydb
        jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"
        # The user we wish to execute our statement as
        jdbc_user => "postgres"
        # password
        jdbc_password => "mypassword"
        # The path to our downloaded jdbc driver
        jdbc_driver_library => "/path/to/postgresql-9.4-1201.jdbc41.jar"
        # The name of the driver class for Postgresql
        jdbc_driver_class => "org.postgresql.Driver"
        # our query
        statement => "SELECT * from contacts"
    }
}
output {
    elasticsearch {
        protocol => "http"
        index => "contacts"
        document_type => "contact"
        document_id => "%{uid}"
        host => "https://urltomyelasticinstance"
    }
}

The configuration file above can be run with the following Logstash command:

"\pathtologstashbindirectory\logstash.bat" -f simple-out.conf
The configuration, when run, will:
  1. Connect to the postgres database.
  2. Get all the data from the ‘contacts’ table.
  3. Connect to your elastic instance.
  4. Create a new index called ‘contacts’
  5. Create a new document type called ‘contact’
  6. Automatically create mappings for the fields in the contacts table.
  7. Populate the index with the data.
Easy, right?
The challenges come when your scenario deviates from this very simple example.
In the next post I’ll detail some of the issues that you might come across when deviating from this simple example.

 

 

 

Tech For Good – DigitalDNA – Hackathon 2017


Hackathon Success

As part of the DigitalDNA conference in Belfast in early June, a hackathon was held in the wee hours around the topic of youth unemployment.

DigitalDNA: Tech Conference Held Annually In Belfast

The challenge was to develop, in teams and within 12 hours, an app/service/tool to help alleviate youth unemployment. The prize? A trip to Dubai to present to a VC firm.

Guess what, our team won, woot!

Domain Experts

At the end of the first day of the conference, while the ‘corporates’ were milling out, the ‘have-a-go techies’ started appearing from the shadows.

Panel Discussions

The hackathon started at 4pm with some invaluable panel discussions with youth workers around the topics involved in youth engagement.

The insights learned from these discussions, and from further engagement with these domain experts, really helped to shape everyone’s thoughts around potential solutions.

Programmer Fwends!

And then, we were ready to go… almost.

First we needed a team.

Now, I’m a decent programmer, but I know my [many] limitations. Entering a hackathon on my own was not on my radar; I wanted to learn from others’ experience and skills in this 12-hour window.

Conor Graham helped to broker team alliances

Luckily, there were others in a similar position, and we quickly formed a team; myself, Luke Roantree & Hussien Elmi (we did have Samir Thapa at the start but unfortunately he had to leave early on and could not return).

From working with Luke’s father at Spatialest and hearing good reports from friends about Hussien at Deloitte, I knew we had a great team.

Loads of Time?

12 hours to bring an idea to life may seem an achievable goal at first sight, but that’s only if you have a clear idea in the first place.

Distilling Ideas

Creating and distilling ideas is a time-consuming process, but fortunately the domain experts were on hand to help. By bouncing ideas off the experts our team managed to agree on an initial direction and set to work.

Ready To Rock

With only about 8 hours to go, we were ready to rock; we were going to build an app.

Hussien Ready To Rock

Hussien and I had previously met at an Ionic meetup where I was speaking; using this hybrid mobile app technology was a no-brainer for us. With Ionic you can build cross-platform apps very quickly, and time was of the essence. Although Luke had not used the technology before, we knew he was a whizz at anything he put his mind to.

The Graft

Pumped up on red bull, coffee and pizza all the teams really started getting into their stride around 11pm.

Red Bull (other sleep deprivation agents are available)

After the original chatter in the early evening all the teams were furiously coding away. With the realization rapidly dawning that less than 5 hours were left on the clock, the teams had partitioned out their work and were now working in silos.

Bed!

Funny thing about coffee and Red Bull, what goes up, must come down.

Around 2am the effects of the long day and the caffeine started taking their toll. Bleary-eyed developers roamed the conference space and focus started shifting away from computer screens to thoughts of bed.

After 3am very few people were left and eventually even our team decided to call it a ‘day’.

After gathering up as many free cupcakes, cold pizzas, beers and crisps as we could humanly carry (admittedly it was a lot!), we started to make our way home on foot. We must have looked a random bunch on the Ormeau Road at 3:30am, but to my surprise, not many people batted an eyelid at 3 geeks laden with that many munchies at that time in the morning. Go figure!

Presentation Time

The presentations were to take place in the afternoon of the 2nd day of DigitalDNA conference.

What’s My Potential: our hybrid app

By this stage our app had quite a polished feel. Working with collaboration tools such as Trello, GitHub and Slack, we had worked well as a team and managed to produce an immense amount of output in a short period of time.

I had my daughter’s sports day to attend, so it was up to Luke and Hussien to present. They both knocked it out of the park!

The presentation was flawless and the demo was impressive. The judges were impressed not only with how polished the app was, but also with how the domain experts’ views had been taken on board.

Luke And Hussien Recognized for their hard work

In the end, credit must go to all the teams. Every team worked hard on trying to find ways to alleviate the globally transferable issue of youth unemployment.

Kudos also to everyone involved in making the DigitalDNA conference happen. The conference brought together all that is good about the tech scene in Northern Ireland and beyond.

Big thanks also to the organizers and sponsors of the hackathon; it was really well run and we enjoyed every minute of it (even at 3.30 in the morning).

Hopefully all goes well in Dubai with our presentation to Falcon & Associates in November. With such a capable team and great mentors from the HackForGood team, I’m sure we won’t disappoint.

WordPress REST API – Part 4 – Ionic 3

Standard

Ionic 3 + Angular 4

This post follows on from previous posts about hybrid mobile app development using the new WordPress REST API and Ionic: Part 1, Part 2, Part 3.

Part 3 contained a walk-through of the Ionic 1 code. What follows is a walk-through of the code for an Ionic 3 app.

If you followed the steps in Part 2, you should by now have a [very basic] Ionic app running in your browser. The app will allow you to:

  • Authenticate with your remote, WordPress REST API enabled, website.
  • Make a post from the mobile app to your WordPress site.

Ionic 3 to WordPress

Pretty simple stuff.

The code that performs the magic is pretty simple too.

Ionic Project Structure (Ionic 3)

Ionic 3 Project Structure

As you can see from the folder structure above, there are quite a few folders in our Ionic app.

However, the important files and folders are as follows:

  • src/app: The first component shown on screen for the app. Things like navigation routes are configured here.
  • src/pages: The pages for the app are in here, i.e. the Login and Report pages.
  • src/providers: This folder contains reusable services that perform business logic, e.g. authentication and connection to WordPress.
  • ionic.config.json: This file contains configuration options for the app. In Part 2, we changed a setting in this file to point to our WordPress site.

The folder structure is much like any other Angular 4 application, so we will head straight to the code to see what the key lines are:

/app/app.component.ts

As this component is the entry component for the app, this is where we configure whether the user should be navigated to the Login or Report screen on startup.

In this file we:

  • Start listening to our UserProvider (see user.provider.ts) to see if the logged in/out event is fired.
  • If the logged-in event fires, navigate to the Report page; otherwise go to the Login page.
constructor(
    private events: Events,
    private userData: UserProvider
  ) {
    // start listening to login and log out events
    this.listenToLoginEvents();
    // check to see if user has logged in before
    // if they have there will be a logged in event fired
    userData.checkedLoggedInStatus();
  }

  listenToLoginEvents() {
    // if the user is logged in then navigate to the Report page
    this.events.subscribe('user:login', () => this.rootPage = Report);
    this.events.subscribe('user:logout', () => this.rootPage = Login);
  }

/src/providers/user.provider.ts

In the previous step we saw that a UserProvider was referenced; this provider is declared in user.provider.ts.

The main job of the UserProvider is to exchange the username and password for an authentication token from the WordPress server. This job is outlined in the ‘login’ function.

The login function will:

  • Contact the WordPress server with the username and password to ask for an authentication token.
  • If successful, the token will be used in every subsequent HTTP call to the WordPress server. We do this by creating our own HTTP client and injecting the token into every request header (see http-client.ts).
  • The authentication token will be stored in case the user comes back to the app at a later date.
  • An event is fired to tell subscribers that the user is now logged in.
// this is a unique token for storing auth tokens in your local storage
  // for later use
  AUTHTOKEN: string = "myauthtokenkey";

  // determine if the user/password can be authenticated and fire an event when finished
  login(username, password) {
    let data = { username: username, password: password };
    // remove any existing auth tokens from local storage
    this.stor.remove(this.AUTHTOKEN);
    // the important bit, contact the WP end point and ask for a token
    this.http.post('/server/wp-json/jwt-auth/v1/token', data).map(res => res.json())
      .subscribe(response => {
        // great we are authenticated, save the token in localstorage for future use
        this.stor.set(this.AUTHTOKEN, response.token);
        // and start using the token in every subsequent http request to the WP server
        this.http.addHeader('Authorization', 'Bearer ' + response.token);
        // fire an event to say we are authenticated
        this.events.publish('user:login');
      },
      err => console.log(err)
      );
  }

/src/providers/http-client.ts

The HttpClient class has a very simple purpose: inject our authentication header into every GET and POST operation we make to the WordPress server.


// inject the header to every get or post operation
  get(url) {
    return this.http.get(url, {
      headers: this.headers
    });
  }
  post(url, data) {
    return this.http.post(url, data, {
      headers: this.headers
    });
  }

/src/providers/word-press-provider.ts

The WordPress provider has a single function: try to post the score and report data to WordPress.

The ‘createReport’ function:

  • Sends a message to the app to tell it that a save operation has started (for the purpose of showing spinners etc).
  • Sets the JSON data required by the WordPress REST API posts call (the full range of options can be seen here).
  • Posts the information to our WordPress website.
  • Gets back a success/failure message.
  • Lets the App know we have finished the save.
createReport(score: string, report: string) {
    // let the app know we have started a save operation
    // to show spinners etc
    this.events.publish('wordpress:savestatus', { state: 'saving' });
    // set the JSON data for the call
    // see https://developer.wordpress.org/rest-api/reference/posts/#create-a-post for options
    let data = {
      title: score,
      excerpt: report,
      content: report,
      status: 'publish'
    };
    // the important bit, make a request to the server to create a new post
    // The Authentication header will be added to the request automatically by our Interceptor service
    this.http.post('/server/wp-json/wp/v2/posts', data).subscribe(data => {
      // tell the app that the operation was a success
      this.events.publish('wordpress:savestatus', { state: 'finished' });
      this.events.publish('wordpress:createdreport');
    }, error => {
      this.events.publish('wordpress:savestatus', { state: 'error', message: error });
    });
  }

/pages/login/login.ts

The login page is very simple. When the login button is pressed, the username and password input in the textboxes are passed to our UserProvider service.


username: string;
  password: string;
  constructor(public UserProvider: UserProvider) {
  }
  login() {
    this.UserProvider.login(this.username, this.password);
  }

/pages/report/report.ts

Again, the report page is very simple. When the report button is pressed, the score and report input in the textboxes are passed to our WordPress service.

constructor(private events: Events, private wordpress: WordPressProvider, private user: UserProvider, private loadingCtrl: LoadingController, private toastCtrl: ToastController) {
    this.createLoader();
    this.listenToWordPressEvents();
  }
  createReport() {
    this.wordpress.createReport(this.score, this.report);
  }

 

Any Questions?

That’s basically all there is to it.

If you have any questions, or any amendments that I can make to the GitHub repo, then please comment below…

Scaling WordPress

Standard

In this post I will attempt to describe how you can prepare your WordPress site so that it can be scaled.

WordPress is generally not scalable

The usual deployment scenario is to have the database, core files, user files and web server all on the same server.

In this scenario, the only scaling that can be performed is to keep adding memory and power to a single server (vertical scaling).

Scaling Options

‘Scaling vertically’, also called ‘scaling up’, usually requires downtime while new resources are being added, and eventually hits hardware limits. When Amazon RDS customers need to scale vertically, for example, they can switch from a smaller to a bigger machine, but Amazon’s largest RDS instance has only 68 GB of memory.

‘Scaling horizontally’ means that you add scale by adding more machines to your pool of resources. With horizontal scaling it is often easier to scale dynamically, by adding more machines to the existing pool.

Horizontal vs Vertical Scaling

Horizontal Scaling For WordPress

To enable horizontal scaling for WordPress we need to first decouple the various parts from each other (web server, files and database). Once decoupled, we can place each part on separate servers and scale them as necessary.

Decouple Before Scaling

Decoupling the database is ‘easy’. You can move your WordPress database to its own server by taking a backup and migrating it (guess what, there’s a plugin for that). Once moved, you just need to change the ‘DB_HOST’ value in your wp-config file.
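
The change itself is a one-liner in wp-config.php (the host name here is illustrative):

/* wp-config.php: point WordPress at the now-separate database server */
define('DB_HOST', 'db.example.com:3306');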

Decoupling the web server from the files (WordPress core files and media library files) is more complicated. From what I can gather, there are two options for doing this:

  1. Host only your library files on a remote cloud server. This article explains how to achieve this.
  2. Host both your core files and library files on a mounted volume linked to a remote cloud server. This article explains how to link your uploads folder to cloud storage, but you can equally link your entire WordPress folder.

I prefer option 2 because:

  • Option 1 keeps your core files coupled to the web server.
  • Option 1 also only caters for new uploads, not existing files.
  • In addition, option 1 involves rewriting the URL links for your files.
  • Option 2 keeps your URL links the same, so you can revert or change cloud provider more easily.

However, option 2 is way more complicated to set up. In my next post I will show how Docker can help with this process.

Decoupling complete, now scale!

You can now scale your WordPress site.

In the next post I will show how to use Docker, in particular Docker Cloud, to make this process painless.

Ionic 2 First Impressions

Standard

Even More Productivity?

After experiencing the immense productivity gains from using Ionic for mobile development, surely using Ionic 2 would be even more productive, right?!

Sadly. No. Not yet, at least.

I had hoped to be writing a post telling everyone how awesome Ionic 2 is.

I had hoped to be falling over myself to tell you about how quickly it was to get started, the incredible workflow, the intuitiveness of the framework, the terseness of the code.

Instead, what follows is a moan, colored by disappointment and frustration, from using Ionic 2, TypeScript & Angular 2.


Ionic, Angular 2 and TypeScript

Ionic is a cross-platform mobile app framework. It uses HTML & JavaScript (Angular) to create mobile apps that can run on Android and/or iOS.

Ionic 2 is more of the same, but with a few notable differences. You will, inevitably, be using Angular 2 and TypeScript to create your apps.

Angular is a JavaScript framework which makes creating single page web apps easier than using plain JavaScript. Angular 2 is Google’s second attempt at the framework, and a significant departure from the first.

TypeScript is a superset of JavaScript. It allows you to write tidier, more elegant, maintainable JavaScript.

Now, both Angular 2 and TypeScript are great in my book, but they are ‘new’. This leads me to my first gripe about the Ionic 2 stack: the learning curve.


Gripe 1: The Learning Curve

There are several steep learning curves involved with the Ionic 2 stack.

TypeScript is a big departure from vanilla JavaScript. The syntax should not be alien to anyone who currently uses ES6/ES2015, but it will still take some getting used to.

The other steep learning curve is Angular 2. Although the syntax in Angular 2 is more elegant and easier to learn than Angular 1’s, they both have steep learning curves.

There are also changes in Ionic 2 which are a departure from Ionic 1 and which you must learn. However, these changes are not as severe as the changes in Angular 2 and the move to TypeScript.

 


Gripe 2: Beta

Ionic 2 is a beta.

Angular 2 is just out of beta.

TypeScript is relatively new.

If you regularly program via Stack Overflow ‘copy and paste’ you will be frustrated by the lack of help, and by the out-of-date help that you do find.


Gripe 3: The Showstopper: Transpiling & Bundling

Ok, here’s the thing that’s eating me most: transpiling & bundling.

This is not a gripe with Ionic 2 per se; it is a gripe about modern web development.

Before, with Ionic 1, when using the web tools to debug your app, feedback was instant: you made a change to your code and the changes were reflected immediately on screen.

Now, with Ionic 2, when you save, all your code needs to be compiled from TypeScript into something the browser can understand. On top of that, Ionic 2 now bundles your application for use in the browser every time a change, no matter how small, is made.

The upshot is that when you save even a small project, the wait before you get feedback is greatly increased. I have waited 30 seconds or more for a simple change in my JavaScript to take effect on screen!

(Maybe I am using it wrong? Please, please, correct me if I am.)

This is a BIG problem for me. The thing I liked most about Ionic 1 (and web development in general) was how immediate the feedback loop was.

Things were so much more productive in the web world compared to using compiled languages. Working with a scripting language had its issues, but it felt so productive not having to wait on a compiler to finish.

Transpilers & bundlers have added a compile step to web development which affects productivity.

Opinion

Largely due to the frustrating feedback loop, I will not be using Ionic 2 for new projects yet.

Sure, there are a lot of benefits from using TypeScript to create a maintainable code base, and productivity gains from using Angular 2, but these are currently outweighed by the negatives.

Ionic 1 is still more productive for me and I will be continuing to use it in the near future.

As always, this is just my opinion. If I am missing something obvious, or have got this horribly wrong, please correct me in the comments below….