Svelte: Web development made simple


*this post is the 4th of a 4 part series of posts exploring Supabase, Vercel & Svelte. This post goes deeper into Svelte (and its application framework SvelteKit)

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

In a previous post, I explored why a trio of technologies—Supabase, Vercel, and Svelte (along with its framework, SvelteKit)—constitutes my current ideal tech stack.

This time, I’m turning the spotlight on Svelte and SvelteKit. Both have rekindled my passion for front-end development, presenting a fresh perspective on building interactive web applications.

The Pursuit of Developer Happiness

I’ve considered myself to be a “full stack developer” for a long time, but the front-end web development part of the stack was always my “weak” area. I always neglected this part of my arsenal because it didn’t bring me joy. Doing “more with less” is the mantra that brings happiness to my work, and web development, up until recently, just seemed to get in the way of my productivity.

Recently, I’ve fallen back in love with front-end development due to Svelte (and SvelteKit).

It Shouldn’t Be This Difficult

Over the years I’ve seen websites and apps become more complicated.

I’ve gone through several iterations of JavaScript frameworks, tools and libraries to catch up with this complexity. Over these iterations, unfortunately, the developer experience (DX) has not significantly improved and has become arguably worse.

The situation is exacerbated by the need to support an ever-growing array of devices and screen sizes, requiring responsive and adaptive design techniques.

User expectations for rich, app-like experiences have soared, demanding more sophisticated front-end and back-end logic, including real-time interactions and improved performance.

Additionally, the importance of SEO, accessibility, and security has led to yet further layers of complexity.

Lastly, development workflows have become more intricate. Developers need to be aware of CI/CD pipelines, cloud-based services, and the necessity for cross-disciplinary skills spanning design, development, and deployment.

All these factors have contributed to a landscape where web development demands a broader, more sophisticated skill set.

Developers need to maintain an ever-deeper understanding of a rapidly changing technology ecosystem.

Using an “opinionated” application framework has helped me to manage these complexities and increase my productivity.

For complex web apps, vanilla JavaScript is arguably always an option. However, I can imagine being highly frustrated by the verbosity of the resulting code. To reduce the amount of code, I could imagine ending up writing a pseudo framework to abstract most of the verbosity. For those who dare, I salute you. But I’d personally rather skip straight to using a battle-tested framework to reduce code complexity.

However, not all frameworks are created equal.

For me, an ideal framework is one that can do more with less code without compromising on flexibility and speed. A great developer experience is also a must to maximize developer productivity.

In Svelte, and its application framework SvelteKit, I feel that I’ve got a great mix.

My Svelte Discovery: From Experimentation to Production

My adventure with Svelte began as a weekend experiment.

After enjoying a recent foray into VueJS, I was curious to explore a framework that compiled down to vanilla JS while still promising a declarative coding experience.

The initial trial was for a small project aimed at enhancing a local community initiative. The objective was straightforward: deliver an engaging, performant web experience with minimal overhead for both the developers and the end-users.

To my delight, the project was not only a success in terms of its community impact but also a revelation in web development efficiency and simplicity.

Svelte, coupled with SvelteKit for seamless full-stack integration, transformed how I now approach front-end development, leading me to adopt it for several subsequent production projects.

Workflow Integration and Development Joy

Svelte’s integration into the developer workflow is seamless.

Its component-based architecture—enhanced by reactivity and compile-time optimizations—fits perfectly with modern development practices.

Furthermore, SvelteKit enriches this experience by offering a convention-over-configuration approach to building applications, from static sites to SEO-friendly SSR (Server-Side Rendered) applications and everything in between.

This framework has a unique proposition: write less code, without losing expressiveness or functionality.

For a developer who values both productivity and performance, Svelte’s proposition is incredibly appealing.

Unified App and API Development

A pivotal moment for me was utilizing SvelteKit’s ability to cohesively handle application logic, SSR, and backend APIs within the same project repository.

SvelteKit’s file-based routing and server-side capabilities mean that an application’s front end and its backend API can live side by side. This co-location streamlines the development process, especially for small teams or projects where agility and speed are paramount.
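As a rough sketch of what that co-location can look like (a minimal example assuming SvelteKit’s standard file conventions; the endpoint and data are purely illustrative):

// src/routes/api/todos/+server.js - a server-only API endpoint
import { json } from '@sveltejs/kit';

export async function GET() {
  // in a real app this would query a database
  const todos = [{ id: 1, text: 'Write the blog post' }];
  return json(todos);
}

// src/routes/todos/+page.server.js - loads data on the server before rendering the page
export async function load({ fetch }) {
  const res = await fetch('/api/todos');
  return { todos: await res.json() };
}

Both files sit in the same repo and deploy together, which is exactly what makes the co-location so productive.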

Performance Outsourced

SvelteKit, the application framework built on top of Svelte, offers several performance advantages over other JavaScript frameworks. These advantages can make it an attractive choice for developers focused on building highly efficient and fast web applications.

Here are some of the notable performance benefits:

  1. Compilation Step: Unlike frameworks that rely heavily on runtime interpretation (like React or Vue), Svelte moves much of the work to compile time. This means Svelte compiles components into highly optimized vanilla JavaScript at build time, reducing the need for a heavy runtime library. As a result, the final code shipped to the browser is leaner and faster to execute.
  2. Less Boilerplate Code: Svelte’s design philosophy emphasizes simplicity and minimalism, resulting in less boilerplate code. This not only makes development faster but also leads to smaller bundle sizes, which directly impacts load times and performance.
  3. Built-in Page Routing and SSR: SvelteKit comes with built-in support for page routing and server-side rendering (SSR). SSR ensures that pages are rendered quickly on the server, reducing the initial load time, while client-side routing allows for seamless navigation without full page reloads. All this helps to create a smoother user experience.
  4. Efficient Reactivity Model: Svelte’s reactivity model is designed to be very efficient. It updates the DOM directly when the state changes, without the need for a virtual DOM diffing algorithm. This results in faster updates and interactions, as there is less computational overhead involved in making UI changes.
  5. Integrated Tooling: SvelteKit offers integrated tooling out of the box. With features like hot module reloading, development time is sped up and optimizations are easier to implement.

These advantages make SvelteKit an appealing option for developers prioritising performance and efficiency in their web projects. However, the best choice of framework also depends on specific project requirements, existing developer skill sets, and other factors.

However, for me, working with small greenfield startup teams, the choice has been simple.

Embracing Svelte: A Testament to Developer-Focused Design

In conclusion, my journey with Svelte and SvelteKit has been nothing short of transformative.

These tools have not only simplified the development process and reduced the size of my codebases; using Svelte has also resulted in superior end-user experiences: fast, engaging, and accessible web applications.

For developers and teams navigating the complexities of modern web development, looking for a balance between productivity, performance, and user experience, Svelte and SvelteKit offer a compelling, developer-friendly pathway.

*this post is the 4th of a 4 part series of posts exploring Supabase, Vercel & Svelte.

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

Vercel: Developer-focused, powerful & cost-effective


*this post is the 3rd of a 4 part series of posts exploring Supabase, Vercel & Svelte. This post goes deeper into Vercel

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

In a previous post, I outlined why Supabase, Vercel & Svelte make up my current ideal technology stack. This post delves deeper into Vercel and why it has helped me fall in love with full-stack development again.

Productivity as a Priority

As a “seasoned” software engineer, I’ve battle-tested numerous deployment & hosting platforms.

I am always on a quest for a more streamlined, developer-friendly approach. The goal is always to give developers space to focus on the things that really matter.

That search has led me, again and again, to Vercel in the past few years.

Here’s my personal take on why Vercel has become an indispensable tool in my development stack.

Discovering Vercel: Weekend project to production apps

My journey with Vercel began during a weekend project launching a web app for a local charity.

The goal was to deliver an easy-to-use, easy-to-maintain web application, created with minimal fuss and at minimal cost to the charity.

Over the course of the weekend, I was able to create a web app & CI/CD pipeline at zero ongoing cost to the charity.

Since then I’ve used Vercel on a number of other production projects.

Seamless integration with developer workflows

The biggest selling point of Vercel is its seamless integration with developer workflows.

An estimated 100m+ developers incorporate GitHub somewhere in their workflow, and I am one of them.

Vercel hooks easily into your GitHub repo. After linking to GitHub, Vercel automatically recognises what type of project you’ve got and then builds and deploys your web-based and/or node project every time a branch is pushed.

It may sound like a simple thing, but it saves a huge amount of time, complexity, and expense for a significant number of web & node developers.

This single feature, coupled with a host of other performance and developer-focused features, makes Vercel highly persuasive for web and node developers.

Develop App & API together

A big game changer for me was the ability to develop & deploy my [web] apps and APIs at the same time.

API and APP in the same repo

If you include an “API” folder in your repo, Vercel “automagically” spins up a serverless function to serve the endpoints.
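As a rough sketch (assuming the Node.js runtime; the file name and response are just for illustration), one of those endpoints can be as small as:

// api/hello.js - Vercel exposes this file as a serverless function at /api/hello
export default function handler(req, res) {
  const name = req.query.name || 'world';
  res.status(200).json({ message: `Hello, ${name}!` });
}

Push the file to your repo and the endpoint is built and deployed alongside the rest of the project.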

Being able to develop an App and API in the same codebase is a big win for smaller startup teams where the code is changing rapidly.

There is no disconnect between the two: new API routes are deployed together with the front-end changes, so the two never fall out of sync.

Even better, if you are developing with Node for your API, and a JavaScript framework for the front end, the same developer can understand and update both aspects.

Performance as someone else’s problem

Vercel doesn’t just deploy your application; it helps optimize and scale it.

Leveraging global content delivery networks (CDNs) and smart caching strategies helps your application load fast, regardless of where your users are.

Whether your webpage or API experiences a handful of users or a sudden surge in traffic, performance remains consistent.

This peace of mind, knowing that scalability and optimization aren’t something I have to actively manage, allows me to focus on other, client-centric, tasks.

Vercel Pricing (March 2024)

The Cost

The pricing tiers underline the feeling that Vercel was built for developers, by developers.

For smaller teams, being able to try out ideas quickly at no cost allows them to iterate at speed.

Not having to raise a purchase order or ask engineering for some server resources, just to create a proof of concept, is perfect for startups and small teams.

Even when you get past the proof of concept stage, the pricing is extremely persuasive for small and large teams alike.

Embracing the Future with Vercel

My experiences with Vercel have made it a staple in my development workflow.

The ease of use, performance optimizations, and scalability it offers are fantastic for the types of projects I regularly work on (small team start-ups).

For any development team, in a similar situation, looking to streamline their development & deployment processes, I can’t recommend Vercel highly enough.

*this post is the 3rd of a 4 part series of posts exploring Supabase, Vercel & Svelte.

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

Supabase: Making development easy, scalable & affordable


*this post is the 2nd of a 4 part series of posts exploring Supabase, Vercel & Svelte. This post goes deeper into Supabase

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

In the previous post I outlined why Supabase, Vercel & Svelte make up my current ideal technology stack. This post delves deeper into Supabase and why it has helped me fall in love with full-stack development again.

What is Supabase?

Supabase is a managed service which encompasses (but is not limited to) all of the following: authentication, database, file storage & serverless functions.

Supabase is similar to other “backend as a service” (BaaS) offerings such as Firebase, but with a few notable differences: the project is open-source and is centred around an open-source relational database (Postgres).

What makes it so useful?

As I have lamented in the past, app development is complicated. Anything that reduces stack complexity can help focus developers on the things that really matter.

I tried Supabase on a weekend project for a local charity and achieved so much in that single weekend that I would now consider myself an advocate for the product.

Following that experience, I have now used Supabase successfully for two additional production projects and plan to use it in the future for similar scenarios (small team startups).

Advocating Supabase at a JavaScript meetup. Slides below…

Creating a relatively simple app over a weekend is not a huge accomplishment. There are other services and no-code platforms that can do something similar in the same timescales.

However, experience has taught me to get into the weeds with a product and then extrapolate into the future to gauge the real value of a tech stack. Low code and no-code tools are great, but at some point, in a growing project, you will hit a wall.

What makes Supabase stand out is that, coupled with other developer tools like Svelte, it can be at least as productive as no-code tools without the drawbacks, e.g. vendor lock-in, limited customization, up-front costs and scalability ceilings.

Embracing Open-Source and Community

My gravitation towards Supabase is also influenced by its open-source ethos which promotes transparency, collaboration, and community-driven innovation.

Being open to open source is more than just idealism; it’s also pragmatic.

The Supabase project is open source, i.e. the code that runs its managed service can be downloaded and used on a server of your choosing.

If Supabase decides to increase the managed service cost to a level where it no longer makes sense to use it, you can manage the services yourself elsewhere.

Supabase has been completely transparent about its open-core business model from the start; hopefully this model continues to work for them.

However, relying on open-source projects is not without potential pitfalls, especially when open-source companies’ heads get turned by greedy VCs and they start profiteering.

At one time, Elastic was my tool of choice for multi-faceted search, but the change in licence by the company has left a bad taste.

However, even though open-source licences can change, it is still better than the closed-source alternative where you are completely at the vendor’s whims from day one.

Simplifying the Complex

Creating apps is a complicated process even without having to worry about managing servers.

Delegating responsibility for managing auth, database, and storage to a managed service allows small teams to concentrate on more impactful concerns.

Not only does Supabase take these concerns away from you but it does it all in an easy-to-use dashboard.

The developer experience in general has been, dare I say it, enjoyable.

Using the Supabase tools and libraries has successfully reduced the complexity and lines of code in my apps.
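To give a flavour of that, here is a minimal sketch using the supabase-js client (the project URL, key and “Todo” table are placeholders, and exact method names can vary between client versions):

import { createClient } from '@supabase/supabase-js';

// the project URL and anon key come from the Supabase dashboard
const supabase = createClient('https://your-project.supabase.co', 'public-anon-key');

// sign the user in - auth is handled entirely by the managed service
const { error: authError } = await supabase.auth.signInWithPassword({
  email: 'user@example.com',
  password: 'a-strong-password'
});

// query a table - row-level security decides which rows this user can actually see
const { data: todos, error } = await supabase.from('Todo').select('*');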

The Security Model: Easy to Understand

The row-level security model in PostgreSQL is easy to configure and understand.

It presents a straightforward yet robust framework that drastically reduces the risk of misconfiguration—making security accessible to all of the team, even for newcomers, from day one.

However, it’s not perfect.

I have experience with several different approaches to securing data. My least favourite was to implement the security rules entirely in code, i.e. lots of if/then statements hidden away where only the core development team could understand or change them.

In contrast, in my opinion, the “best” way I have experienced is to use declarative authorization rules, defined in the data schema e.g. Amplify authorization rules.

In the example below, any user can read from the “Todo” table/graphql type, but only the person who created the row can update or delete their own data.

## Configure schema and auth rules
## in one place
type Todo
  @model
  @auth(rules: [{ allow: public, operations: [read] }, { allow: owner }])
{
  content: String
}

-- Implementing something similar
-- using Postgres/Supabase
create policy "Allow select, update and delete for users based on id"
  on "public"."Todo"
  as permissive
  for all
  to public
  using ((auth.uid() = id));

create policy "Read for all users"
  on "public"."Todo"
  as permissive
  for select
  to public
  using (true);

It would be great if Supabase could cater for the type of declarative security shown above; if anyone knows whether it can, please reach out.

Scalability and Performance: Meeting Tomorrow’s Needs Today

Premature optimization is the root of all evil, let someone else grapple with the demon

In the past, I have spent countless hours trying to eke out marginal gains in performance in case my app goes viral. Spoiler alert… it didn’t… and I’ll never get those hours back.

Let someone else (with probably more expertise) obsess about performance and scalability.

Supabase’s seamless scalability ensures that as you grow, your backend does too—smoothly and reliably. This peace of mind allows you to focus your energies on innovation and enhancing user experience, secure in the knowledge that your technological foundation is a given.

The Cost-Effectiveness of Dreaming Big

Free and Pro pricing (March 2024)

In the world of startups, where every resource counts, Supabase’s pricing model is perfect.

The free tier is generous enough to battle-test your idea. The follow-on tiers are predictable and fair.

It’s not just on infrastructure costs where Supabase shines. The savings in developer hours it enables through its exceptional developer experience are just as significant.

Again, this efficiency allows you to channel resources into areas that directly amplify user value and platform growth.

A Comparison with the Giants

In my career, I have used other back-end-as-a-service offerings and Supabase compares favourably for the projects I’ve been doing lately i.e. small team startup.

I have used all of the following comparable technologies in production environments: Firebase, Retool, AWS Amplify, Budibase.

I have tried, but not implemented the following tools: Planetscale

I have not tried, but want to look at, the following: Parse, NHost, Backendless, AppWrite

My advice, if any is needed, is to look at your particular situation and try out any or all of the tools above on a pet project.

The “try out” part is key; all these tools have wonderful marketing websites which promise the earth. It’s not until you get down into the weeds on developer experience and pricing that the suitability becomes clearer.

*this post is the 2nd of a 4 part series of posts exploring Supabase, Vercel & Svelte. This post goes deeper into Supabase

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

The Power Trio: Svelte, Supabase, & Vercel – My [current] Tech Stack of Choice


*this post is the 1st of a 4 part series of posts. This post gives a quick overview of Svelte, Vercel & Supabase, the following posts will go deeper into the technologies.

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

Choosing the right stack for your project is akin to setting the foundations of a building. It needs to be solid, reliable, and scalable.

As a software engineer for over 26 years, I’ve finally found a stack that feels like home: Svelte, Supabase, and Vercel.

This trio has not only supercharged my own productivity but has also proven indispensable in my role as CTO of a small startup. When every decision weighs heavily on our budget and future, finding a stack that offers ease of use, scalability, reliability & cost-effectiveness, has been thoroughly reassuring.

Supabase, Svelte & Vercel

Why SvelteKit, Supabase, & Vercel?

Each component of this stack brings something unique to the table.

Svelte, with its simplicity and speed, allows us to build web applications that are incredibly fast and easy to maintain. It eliminates much of the complexity typically associated with front-end development and makes the developer experience delightfully smooth. This has been a game-changer for us: in a startup environment, resources are limited and we need to move quickly without sacrificing quality.

Then there’s Supabase, an open-source Firebase alternative, which has been a revelation. It offers the backend services we need – authentication, database, and storage – without the overhead of managing these systems ourselves. Its PostgreSQL foundation means we’re building on top of a powerful, open-source database. Not only that, its easy-to-use APIs save us countless hours that would otherwise be spent on backend development.

Vercel provides a seamless deployment and hosting solution that integrates perfectly with SvelteKit and GitHub. Its global CDN ensures our applications are fast, no matter where our users are. Its commitment to developer experience makes deploying our applications as simple as a git push. In the fast-paced environment of a startup, Vercel’s scalability and ease of use are invaluable.

Productivity Gains

The synergy between Svelte, Supabase, and Vercel has significantly boosted our productivity. The reduction in context switching, the streamlined development process, and the ease of deployment means we can go from idea to production incredibly fast. In a small startup, where each member often wears multiple hats, being able to focus more on solving our users’ problems and less on the intricacies of our tech stack is a massive advantage.

Scalability for Startups

For any early-stage startup, the ability to scale efficiently is critical. This stack ensures that we’re not just building for the present but are also prepared for future growth. Supabase and Vercel, in particular, offer scalable solutions that grow with us. Both ensure that we can handle increased loads without a hitch (and without surprise bills). This peace of mind allows us to focus on innovation and delivering value to our users, rather than worrying about our infrastructure.

A Personal Reflection

My personal journey through the realms of large corporations and startup agencies has taught me the importance of choosing the right tools. In the past, I’ve dealt with the complexities of custom builds and the challenges of managing primitive services on platforms like AWS, Azure & GCP. While powerful, they often require a significant investment in time and resources to manage effectively.

In my current role, where the margin for error is slim, and our budgets are tight, the simplicity, efficiency, and scalability of the Svelte, Supabase, and Vercel stack have been a blessing. It’s a setup that supports rapid growth and innovation, aligning perfectly with the transparent, agile, forward-thinking ethos of our startup.

My 26 years of experience across different parts of the tech industry have solidified my belief that software development is hard, that the tech stack should make it easier, and that abstracting the difficult parts away to scalable services should always be considered.

With this particular tech stack, I have personally found a great balance between interoperability, scalability and extensibility. A word of caution though: what works for one team is not necessarily right for others. However, for our situation, it’s been a testament to how the right set of tools can not only enhance productivity but also empower a team to focus on what truly matters – creating value for the users.

*this post is the 1st of a 4 part series of posts. This post gives a quick overview of Svelte, Vercel & Supabase, the following posts will go deeper into the technologies.

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

Firestore Saving User Data


Cloud Firestore + Authentication FTW

The last post on this site was about how Firebase’s Cloud Firestore was a great option for saving data, especially in new or prototype applications.

In a previous post I also showed how you can use Firebase’s handy authentication mechanism to easily register and authenticate users to your site.

This post will attempt to show how the Firestore and authentication mechanisms can be combined to store user preference information.

Authentication Recap

In the auth post we saw how easy it was to set up and that, when a user authenticates, they have a unique id which you can then use.


firebase.auth().onAuthStateChanged(function (user) {
  if (user) {
    // User is signed in. uid is now available to use
    var uid = user.uid;
  }
});

The code above shows how we can watch for when a user logs in. After they have logged in we can then use the uid as a unique identifier for each logged in user. The next step is to start creating some data in our Firestore that we can link to this authenticated user each time they log in.

Save User Related Data

Saving user-related data, e.g. profile information, with Cloud Firestore is easy. Once you have an authenticated user’s unique identifier you can start to save documents in your Firestore with the UID as an identifier. Each time the user logs in you just look up the document in your Firestore to retrieve the related information about this user/login.


firebase.auth().onAuthStateChanged(function (user) {
  if (user) {
    // create (or overwrite) the document keyed by this user's uid
    db.collection('users').doc(user.uid).set({
      email: user.email,
      someotherproperty: 'some user preference'
    });
  }
});

In the code above we wait for the user to log in, retrieve their UID and create a document in the Firestore using the uid as the index. From this point we can add as many properties to this document as we need, and retrieve the information any time the user logs in.
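Reading the document back on a later login follows the same pattern; a rough sketch (the field names are just examples):

firebase.auth().onAuthStateChanged(function (user) {
  if (user) {
    // look up the document keyed by this user's uid
    db.collection('users').doc(user.uid).get().then(function (doc) {
      if (doc.exists) {
        var preferences = doc.data();
        console.log('Saved preference:', preferences.someotherproperty);
      }
    });
  }
});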

Simple stuff! Give Firebase a go, you won’t regret it.

Populating Elastic With Your Database


The Problem

Recently I was tasked with providing fast full-text searching, across multiple tables, for a reasonably big database (~2m records).

Immediately I reached for some sort of Solr or Elastic solution. Being the lazy developer that I am, I chose the path of least resistance (most samples online on how to get started), and chose Elastic [edit: I would seriously consider Algolia for my next project if I had fewer than 10k records to index].

Getting Elastic set up was relatively simple. Having access to a few AWS credits, I used the AWS ElasticSearch service because I didn’t want the responsibility for maintaining the server.

Now to populate with some data.

Taking data from a database and putting it in an Elastic index must be a common scenario with lots of examples and help, right?

Well, not quite as common as I thought, and not as easy as I was hoping.

I was “hoping” for a nice simple user interface; point at my database, select what data I want, import into Elastic, done. I thought, with the relative maturity of Elastic, that it would be baked into the product, or someone would have created an add-on already. At the least, I thought, there would be a plethora of examples to show how to get data from a database to Elastic.

What I found was lacking, and that’s the reason I’m writing this post: to help others looking to do something similar.

Logstash

The currently recommended solution for indexing database data is Logstash (up until 2015 the recommended way was through Rivers).


Logstash. Looks great from the diagram

With Logstash you can connect to your data, filter it, transform it, and then add it to Elastic. Great, just what I was looking for!

Logstash takes quite a bit of work to set up and configure though. The installation itself is fairly straightforward: just download and install on Windows or Linux. After installation there is no UI, just a command-line executable to which you supply configurations.

Unfortunately Logstash does not have a way to connect to databases out of the box. To connect Logstash to your database you will first need to jump through a couple of hoops:

  1. You will need to install the JDBC plugin for Logstash; the instructions on how to do this can be found here (an example install command is shown after this list).
  2. The JDBC plugin does not include any database drivers, so you will need to find the driver you need and download it. I am mostly connecting to Postgres, so I need to download the Postgres JDBC driver. Make a note of where you download this driver to; you will need to supply its path in your configuration later.
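Depending on your Logstash version, installing the JDBC input plugin is typically a single command run from the Logstash directory, something like:

bin/logstash-plugin install logstash-input-jdbc

(On older releases the executable is bin/plugin rather than bin/logstash-plugin.)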

Providing you have no issues with your Java installation, you should now be ready to go.

Logstash Configurations

Most of your time will be spent fine-tuning configurations. Simple configurations are straightforward, i.e. connect to a single table with basic field types and import into Elastic. Anything outside these parameters can be challenging.

A basic configuration to connect to a Postgres database and import into a new Elastic index could look something like this:


# file: simple-out.conf
input {
    jdbc {
        # Postgres jdbc connection string to our database, mydb
        jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"
        # The user we wish to execute our statement as
        jdbc_user => "postgres"
        # password
        jdbc_password => "mypassword"
        # The path to our downloaded jdbc driver
        jdbc_driver_library => "/path/to/postgresql-9.4-1201.jdbc41.jar"
        # The name of the driver class for Postgresql
        jdbc_driver_class => "org.postgresql.Driver"
        # our query
        statement => "SELECT * from contacts"
    }
}
output {
    elasticsearch {
        protocol => "http"
        index => "contacts"
        document_type => "contact"
        document_id => "%{uid}"
        host => "https://urltomyelasticinstance"
    }
}

The configuration file above can be run with the following Logstash command:

"\pathtologstashbindirectory\logstash.bat" -f simple-out.conf
The configuration, when run, will:
  1. Connect to the postgres database.
  2. Get all the data from the ‘contacts’ table.
  3. Connect to your elastic instance.
  4. Create a new index called ‘contacts’
  5. Create a new document type called ‘contact’
  6. Automatically create mappings for the fields in the contacts table.
  7. Populate the index with the data.
Easy, right?
The challenges come when your scenario deviates from this very simple example.
In the next post I’ll detail some of the issues that you might come across when deviating from this simple example.

 

 

 

Tech For Good – DigitalDNA – Hackathon 2017


Hackathon Success

As part of the DigitalDNA conference in Belfast in early June, a hackathon was held into the wee hours around the topic of youth unemployment.

DigitalDNA: Tech Conference Held Annually In Belfast

The challenge was to develop, in teams, an app/service/tool to help alleviate youth unemployment, within 12 hours. The prize? A trip to Dubai to present to a VC firm.

Guess what, our team won, woot!

Domain Experts

At the end of the first day of the conference, while the ‘corporates’ were milling out, the ‘have-a-go techies’ started appearing from the shadows.

Panel Discussions

The hackathon started at 4pm with some invaluable panel discussion from youth workers around the topics involved with youth engagement.

The insight learned from these discussions, and from further engagement with these domain experts, really helped to shape everyone’s thoughts around potential solutions.

Programmer Fwends!

And then, we were ready to go…….almost.

First we needed a team.

Now, I’m a decent programmer, but I know my [many] limitations. Entering a hackathon on my own was not on my radar; I wanted to learn from others’ experiences and skills in this 12-hour window.

Conor Graham helped to broker team alliances

Luckily, there were others in a similar position, and we quickly formed a team; myself, Luke Roantree & Hussien Elmi (we did have Samir Thapa at the start but unfortunately he had to leave early on and could not return).

From working with Luke’s father at Spatialest and hearing good reports from friends about Hussien at Deloitte, I knew we had a great team.

Loads of Time?

Twelve hours to bring an idea to life may seem an achievable goal at first sight, but that’s only if you have a clear idea in the first place.

Distilling Ideas

Creating and distilling ideas is a time-consuming process, but fortunately the domain experts were on-hand to help. By bouncing ideas off the experts our team managed to agree on an initial direction and set to work.

Ready To Rock

With only about 8 hours to go, we were ready to rock; we were going to build an app.

Hussien Ready To Rock

Hussien and I had previously met at an Ionic meetup where I was speaking; using this hybrid mobile app technology was a no-brainer for us. With Ionic you can build cross-platform apps very quickly, and time was of the essence. Although Luke had not used the technology before, we knew he was a whizz at anything he put his mind to.

The Graft

Pumped up on red bull, coffee and pizza all the teams really started getting into their stride around 11pm.

Red Bull (other sleep deprivation agents are available)

After the original chatter in the early evening all the teams were furiously coding away. With the realization rapidly dawning that less than 5 hours were left on the clock, the teams had partitioned out their work and were now working in silos.

Bed!

Funny thing about coffee and Red Bull, what goes up, must come down.

Around 2am the effects of the long day and caffeine started taking their toll. Bleary-eyed developers roamed the conference space and focus started shifting away from computer screens to thoughts of bed.

After 3am very few people were left and eventually even our team decided to call it a ‘day’.

After gathering up as many free cupcakes, cold pizza, beer and crisps as we could humanly carry (admittedly it was a lot!), we started to make our way home on foot. We must have looked like a random bunch on the Ormeau Road at 3:30am, but to my surprise, not many people batted an eyelid at 3 geeks laden with that many munchies at that time in the morning, go figure!?!?

Presentation Time

The presentations were to take place in the afternoon of the 2nd day of DigitalDNA conference.

What’s My Potential hybrid app

By this stage our app had quite a polished feel. Working with collaboration tools such as Trello, GitHub and Slack we had worked well as a team and had managed to produce an immense amount of output in a short period of time.

I had my daughter’s sports day to attend so it was up to Luke and Hussien to present. They both knocked it out of the park!

The presentation was flawless and the demo was impressive. The judges were impressed not only with how polished the app was but also with how the domain experts’ views had been taken on board.

Luke And Hussien Recognized for their hard work

In the end credit must go to all the teams. Every team worked hard on trying to find ways to alleviate the globally transferable issue of youth unemployment.

Kudos also to everyone involved in making the DigitalDNA conference happen. The conference brought together all that is good about the tech scene in Northern Ireland and beyond.

Big thanks also to the organizers and sponsors of the hackathon; it was really well run and we enjoyed every minute of it (even at 3.30 in the morning).

Hopefully all goes well in Dubai with our presentation to Falcon & Associates in November. With such a capable team and great mentors from the HackForGood team, I’m sure we won’t disappoint.

Scaling WordPress


*In this post I will attempt to describe how you can prepare your WordPress site so that it can be scaled.

WordPress is generally not scalable

The usual deployment scenario is to have the database, core files, user files and web server, all on the same server.

In this scenario, the only scaling that can be performed is to keep adding memory and power to a single server (vertical scaling).

Scaling Options

‘Scaling vertically’, which is also called ‘scaling up’, usually requires downtime while new resources are being added and eventually runs into hardware limits. When Amazon RDS customers need to scale vertically, for example, they can switch from a smaller to a bigger machine, but Amazon’s largest RDS instance has only 68 GB of memory.

‘Scaling horizontally’ means that you add scale by adding more machines to your pool of resources. With horizontal scaling it is often easier to scale dynamically by adding more machines into the existing pool.

Horizontal vs Vertical Scaling

Horizontal Scaling For WordPress

To enable horizontal scaling for WordPress we need to first decouple the various parts from each other (web server, files and database). Once decoupled, we can place each part on separate servers and scale them as necessary.

Decouple Before Scaling

Decoupling the database is ‘easy’. You can easily move your WordPress database to its own server by taking a backup and migrating (guess what, there’s a plugin for that). Once moved, you just need to change the ‘DB_HOST’ value in your wp-config file.

Decoupling the web server from the files (WordPress core files and media library files) is more complicated. From what I can gather there are 2 options for doing this:

  1. Host only your library files on a remote cloud server. This article explains how to achieve this.
  2. Host both your core files and library files on a mounted volume linked to a remote cloud server. This article explains how to link your uploads folder to cloud storage, but you can equally link your entire WordPress folder.

I prefer option 2 because:

  • Option 1 keeps your core files coupled to the web server.
  • Option 1 also only caters for new uploads, not existing files.
  • Option 1 also involves rewriting the URL links for your files.
  • Option 2 keeps your URL links the same, so you can revert or change cloud provider more easily.

However option 2 is way more complicated to set up. In my next post I will show how Docker can help with this process.

Decoupling complete, now scale!

You can now scale your WordPress site.

In the next post I will show how to use Docker, in particular Docker Cloud, to make this process painless.

Ionic 2 First Impressions


Even More Productivity?

After experiencing the immense productivity gains from using Ionic for mobile development, surely using Ionic 2 would be even more productive, right?!

Sadly. No. Not yet, at least.

I had hoped to be writing a post telling everyone how awesome Ionic 2 is.

I had hoped to be falling over myself to tell you how quick it was to get started, about the incredible workflow, the intuitiveness of the framework, the terseness of the code.

Instead, what follows is a moan colored by disappointment and frustration from using Ionic 2, Typescript & Angular 2.


Ionic, Angular 2 And Typescript

Ionic is a cross-platform mobile app framework. It uses HTML & JavaScript (Angular) to create mobile apps that can run on Android and/or iOS.

Ionic 2 is more of the same, but with a few notable differences. You will, inevitably, be using Angular 2 and Typescript to create your apps.

Angular is a JavaScript framework which makes creating single page web apps easier than just using plain JavaScript. Angular 2 is Google’s second attempt at the framework, and a significant departure from the first.

Typescript is a superset of JavaScript. It allows you to write tidier, more elegant, maintainable JavaScript.

Now, both Angular 2 and Typescript are great in my book, but they are ‘new’. This leads me to my first gripe about the Ionic 2 stack, the learning curve:


Gripe 1: The Learning Curve

There are several steep learning curves involved with the Ionic 2 stack.

Typescript is a big departure from vanilla JavaScript. The syntax should not be alien to anyone who currently uses ES6/ES2015 but it will still take some getting used to.

The other steep learning curve is with Angular 2. Although the syntax in Angular 2 is more elegant and easier to learn than Angular 1, they both have steep learning curves.

There are also changes in Ionic 2 which are a departure from Ionic 1 and which you must learn. However, these changes are not as severe as the move to Angular 2 and Typescript.

 


Gripe 2: Beta

Ionic 2 is a beta.

Angular 2 is just out of beta.

Typescript is relatively new.

If you regularly program via stack overflow ‘copy and paste’ you will be frustrated by the lack of help and the ‘out-of-date’ help that you find.


Gripe 3: The Showstopper: Transpiling & Bundling

Ok, here’s the thing that’s eating me most: transpiling & bundling.

This is not a gripe with Ionic 2 per se; it is a gripe about modern web development.

Before, with Ionic 1, using the web tools to debug your app, feedback was instant. You made a change to your code and the changes were reflected immediately on screen.

Now, with Ionic 2, when you save, all your code needs to be compiled from Typescript to something the browser can understand. On top of that, Ionic 2 now bundles your application for use in the browser every time a change, no matter how small, is made.

The upshot is that, even on a small project, the wait before you get feedback is greatly increased. I have waited 30 seconds or more for a simple change in my JavaScript to take effect on screen!

(Maybe I am using it wrong? Please, please, correct me if I am.)

This is a BIG problem for me. The thing I liked most about Ionic 1 (and web development in general) was how immediate the feedback loop was.

Things were so much more productive in the web world compared to using compiled languages. Working with a scripting language had its issues, but it felt so productive not having to wait on the compiler to finish.

Transpilers & bundlers have added a compile step to web development which affects productivity.


Largely due to my opinion on the frustrating feedback loop with Ionic 2, I will not be using Ionic 2 for new projects yet.

Sure, there are a lot of benefits from using Typescript to create a maintainable code base and productivity gains from using Angular 2, but these are currently outweighed by the negatives.

Ionic 1 is still more productive for me and I will be continuing to use it in the near future.

As always, this is just my opinion. If I am missing something obvious, or have got this horribly wrong, please correct me in the comments below….

WordPress REST API – Part 3 – Ionic 1


Code Run Through

This post follows on from posts about hybrid mobile app development using the new WordPress REST API and Ionic: Part 1, Part 2

What follows is a run-through of the important parts of the code for the application shown in Part 2.

By now you should have a [very basic] Ionic app running in your browser. The app will allow you to:

  • Log into your remote, WordPress REST API enabled, website.
  • Make a post from the mobile app to your WordPress site.

Pretty simple stuff.

The code that performs the magic is pretty simple too.

*I have created both an Ionic 1 and Ionic 3 repo for the App code. However, below I describe the structure for the Ionic 1 repo only.

Ionic Project Structure (Ionic 1)

Ionic 1 Folder Structure


As you can see from the folder structure below there are quite a few folders in our Ionic App.

However, the important files and folders are as follows:

  • www/js: This folder contains all the Javascript code for the app.
  • www/js/app.js: The main entry point for the app. Things like routes are configured here.
  • www/js/controllers.js: The logic behind what happens on the screens is placed inside this file e.g. what should happen when the login button is pressed.
  • www/js/services.js: Reusable services to perform business logic e.g. authentication and connection to WordPress
  • www/templates: This folder contains the HTML for each screen in the app.
  • ionic.config.json: This file contains configuration options for the app. In Part 2, we changed a setting in this file to point to our WordPress site.

The folder structure is much like any other Angular application, so we will head straight to the code to see what the key lines are:

App.js

In app.js we configure the Angular app before it runs.

In this file we state:

  • What our routes are, i.e. the URLs for our app, what HTML should be loaded, and which JavaScript files should control that HTML.
// Ionic uses AngularUI Router which uses the concept of states
// Learn more here: https://github.com/angular-ui/ui-router
// Set up the various states which the app can be in.
// Each state's controller can be found in controllers.js
$stateProvider
  .state('login', {
    url: '/login',
    templateUrl: 'templates/login.html',
    controller: 'LoginCtrl'
  })
  .state('report', {
    url: '/report',
    templateUrl: 'templates/report.html',
    controller: 'ReportCtrl'
  });

// if none of the above states are matched, use this as the fallback
$urlRouterProvider.otherwise('/login');
  • Any injectors/interceptors we want to use. We will see later that, when we get an authentication token, we want this to be automatically passed to every subsequent HTTP call we make. An interceptor is a way of achieving this.
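For completeness, wiring an interceptor up in an AngularJS/Ionic 1 app is typically a one-liner in the config block; a rough sketch (the service name matches the AuthInterceptor covered later):

// in app.js, inside the module's .config() block
// ($httpProvider must be injected into the config function)
$httpProvider.interceptors.push('AuthInterceptor');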

Controllers.js

The controllers.js file contains the logic for the Login and Report screens.

The LoginController has one function: try to log in when the user clicks the Login button.

$scope.login = function () {
  // contact our login service with the data from the username and password fields
  LoginService.loginUser($scope.data.username, $scope.data.password).then(function (data) {
    // if it is a success, go to the Report screen
    $state.go('report');
  }, function (data) {
    // if there is an error pop it up onscreen
    var alertPopup = $ionicPopup.alert({
      title: 'Login failed!',
      template: 'Please check your credentials!'
    });
  });
};

 

The ReportController has a single function too: try to post the score and report data to WordPress (through our WordPress service).

$scope.createReport = function () {
  // show a saving... message while we contact the service
  $ionicLoading.show({
    template: 'Saving...'
  });
  // pass through the values from the score and report fields to the service
  WordPressService.createReport($scope.data.score, $scope.data.report).then(success, failure);
};

Both controllers are very simple; it is in services.js where the real work is performed.

Services.js

The services.js file is the workhorse of the app; it contains 3 services:

  • LoginService. This service is used to contact a WordPress REST API end point and request a JWT authentication token.
// the important bit, contact the end point and ask for a token
$http.post('/server/wp-json/jwt-auth/v1/token', data).error(function (error) {
  failure(error);
}).success(function (data) {
  // you are now logged in, save to session storage, the auth interceptor will pick up
  // and add to each request
  $window.sessionStorage.token = data.token;
  success(data);
});
  • WordPressService. The WordPress service contacts our WordPress REST API and tries to create a post.
var data = {
  title: score,
  excerpt: report,
  content: report,
  status: 'publish'
};
// the important bit, make a request to the server to create a new post
// The Authentication header will be added to the request automatically by our Interceptor service
$http.post('/server/wp-json/wp/v2/posts', data).error(function (error) {
  deferred.reject(error);
}).success(function (data) {
  deferred.resolve(data);
});
  • AuthInterceptor. The interceptor service checks session storage to see if we were given a token and, if we have, adds it to every HTTP request.
request: function (config) {
  config.headers = config.headers || {};
  // if there is a token, add it to the request
  if ($window.sessionStorage.token) {
    config.headers.Authorization = 'Bearer ' + $window.sessionStorage.token;
  }
  return config;
}

Any Questions?

That’s basically all there is to it.

If you have any questions, or any amendments that I can make to the Github repo, then please comment below….